{"id":137,"date":"2015-01-25T20:42:47","date_gmt":"2015-01-25T11:42:47","guid":{"rendered":"http:\/\/ssl.kw.ac.kr\/blog\/?page_id=137"},"modified":"2025-07-01T13:28:03","modified_gmt":"2025-07-01T04:28:03","slug":"papers","status":"publish","type":"page","link":"https:\/\/ssl.kw.ac.kr\/blog\/?page_id=137&lang=en","title":{"rendered":"Papers"},"content":{"rendered":"\n<p><strong>SCI, SCI-E, SCOPUS Indexed Journals<\/strong><br><\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p><em>(SCI) Donghyeon Kim, Kyoung Ryul Lee, Dong Seok Lim, Kwang Hyun Lee, Jong Seon Lee,<br>Dae-Yeol Kim* &amp; Chae-Bong Sohn* &#8220;<a href=\"https:\/\/www.nature.com\/articles\/s41598-025-92582-9\">A novel hybrid CNN-transformer model for arrhythmia detection without R-peak identification using stockwell transform<\/a>&#8221;, Nature Scientific Reports 15.7817 (2025): 1-11. (IF 3.8 Q1)<\/em><\/p>\n\n\n\n<pre class=\"wp-block-verse has-small-font-size\"><strong>Abstract:<\/strong><br>This study presents a novel hybrid deep learning model for arrhythmia classification from electrocardiogram signals, utilizing the Stockwell transform for feature extraction. As ECG signals are time-series data, they are transformed into the frequency domain to extract relevant features. Subsequently, a CNN is employed to capture local patterns, while a transformer architecture learns long-term dependencies. Unlike traditional CNN-based models that require R-peak detection, the proposed model operates without it and demonstrates superior accuracy and efficiency. The findings contribute to enhancing the accuracy of ECG-based arrhythmia diagnosis and are applicable to real-time monitoring systems.
Specifically, the model achieves an accuracy of 97.8% on the Icentia11k dataset using four arrhythmia classes and 99.58% on the MIT-BIH dataset using five arrhythmia classes.<\/pre>\n<\/blockquote>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p><em>(SCIE) Yeon-Ji Park, Geun-Je Yang, Chae-Bong Sohn* and Soo Jun Park* \u201c<a href=\"https:\/\/doi.org\/10.1186\/s12859-024-05710-z\">GPDminer: a tool for extracting named entities and analyzing relations in biological literature<\/a>\u201d, BMC Bioinformatics 25.101 (2024): 1-18. (IF 3.0 Q2)<\/em><\/p>\n<cite><strong>Abstract<\/strong>: <br>Purpose: The expansion of research across various disciplines has led to a substantial increase in published papers and journals, highlighting the necessity for reliable text mining platforms for database construction and knowledge acquisition. This abstract introduces GPDMiner (Gene, Protein, and Disease Miner), a platform designed for the biomedical domain, addressing the challenges posed by the growing volume of academic papers.<br>Methods: GPDMiner is a text mining platform that utilizes advanced information retrieval techniques. It operates by searching PubMed for specific queries, extracting and analyzing information relevant to the biomedical field. This system is designed to discern and illustrate relationships between biomedical entities obtained from automated information extraction.<br>Results: The implementation of GPDMiner demonstrates its efficacy in navigating the extensive corpus of biomedical literature. It efficiently retrieves, extracts, and analyzes information, highlighting significant connections between genes, proteins, and diseases. The platform also allows users to save their analytical outcomes in various formats, including Excel and images.<br>Conclusion: GPDMiner offers a notable additional functionality among the array of text mining tools available for the biomedical field.
This tool presents an effective solution for researchers to navigate and extract relevant information from the vast unstructured texts found in biomedical literature, thereby providing distinctive capabilities that set it apart from existing methodologies. Its application is expected to greatly benefit researchers in this domain, enhancing their capacity for knowledge discovery and data management.<\/cite><\/blockquote>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p><em>(SCIE) Do-Yup Kim, Chae-Bong Sohn, and Hyun-Suk Lee &#8220;Dynamic Joint Scheduling of Anycast Transmission and Modulation in Hybrid Unicast-Multicast SWIPT-Based IoT Sensor Networks&#8221;, IEEE Sensors Journal 23.24 (2023): 31345-31358. (IF 4.3 Q1)<\/em><\/p>\n<cite><strong>Abstract<\/strong>: Simultaneous wireless information and power transfer (SWIPT) technologies are vital in powering Internet-of-Things (IoT) sensor networks. Despite their importance, the traditionally used separate receiver (SR) architecture with a time- or power-splitting (TS\/PS) mode in SWIPT usually results in high energy consumption, especially during the information decoding (ID) process due to energy-intensive local oscillators and mixers. To overcome this, an integrated receiver (IR) architecture has been introduced, sparking the development of compatible SWIPT modulation schemes. However, the aspect of modulation scheduling for the IR architecture in SWIPT-based IoT sensor networks appears to be little explored. This article bridges this research gap by proposing a joint unicast\/multicast, IoT sensor, and modulation (UMSM) scheduling algorithm. We use mathematical modeling and optimization methods to maximize the weighted sum of average unicast service throughput and energy harvested by IoT sensors, while ensuring minimal average throughput for both unicast and multicast services, along with the minimum average harvested energy.
Our simulation results demonstrate the effectiveness of our algorithm in improving energy harvesting (EH) and throughput performance while maintaining necessary constraints.<\/cite><\/blockquote>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p><em>(SCIE) Yeon-Ji Park, Min-a Lee, Geun-Je Yang, Soo Jun Park* and Chae-Bong Sohn* \u201cWeb Interface of NER and RE with BERT for Biomedical Text Mining\u201d, Applied Sciences 13.5163 (2023): 1-11. (IF 2.838 Q2)<\/em><\/p>\n<cite><strong>Abstract<\/strong>: The BioBERT Named Entity Recognition (NER) model is a high-performance model designed to identify both known and unknown entities. It surpasses previous NER models utilized by text-mining tools, such as tmTool and ezTag, in effectively discovering novel entities. In previous studies, the Biomedical Entity Recognition and Multi-Type Normalization Tool (BERN) employed this model to identify words that represent specific names, discern the type of the word, and implement it on a web page to offer NER service. However, we aimed to offer a web service that includes Relation Extraction (RE), a task determining the relation between entity pairs within a sentence. First, just like BERN, we fine-tuned the BioBERT NER model within the biomedical domain to recognize new entities. We identified two categories: diseases and genes\/proteins. Additionally, we fine-tuned the BioBERT RE model to determine the presence or absence of a relation between the identified gene-disease entity pairs. The NER and RE results are displayed on a web page using the Django web framework. 
NER results are presented in distinct colors, and RE results are visualized as graphs in NetworkX and Cytoscape, allowing users to interact with them.<\/cite><\/blockquote>\n\n\n\n<p><\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p><em>(SCIE) Dae-Yeol Kim, Soo-Young Cho, Kwangkee Lee* and Chae-Bong Sohn* \u201cA Study of Projection-Based Attentive Spatial-Temporal Map for Remote Photoplethysmography Measurement\u201d, Bioengineering 9.638 (2022): 1-14. (IF 5.046 Q2)<\/em><\/p>\n<cite><strong>Abstract<\/strong>: The photoplethysmography (PPG) signal contains various information that is related to CVD (cardiovascular disease). The remote PPG (rPPG) is a method that can measure a PPG signal using a face image taken with a camera, without a PPG device. Deep learning-based rPPG methods can be classified into three main categories. First, there is a 3D CNN approach that uses a facial image video as input, which focuses on the spatio-temporal changes in the facial video. The second approach uses a spatio-temporal map (STMap), in which the video is pre-processed so that changes in blood flow are easier to analyze in temporal order. The last approach uses a preprocessing model with a dichromatic reflection model. This study proposed the concept of an axis projection network (APNET) that addresses the drawbacks of these approaches: the 3D CNN method requires significant memory; the STMap method requires a preprocessing step; and the dichromatic reflection model (DRM) method does not learn long-term temporal characteristics.
We also showed that the proposed APNET effectively reduced the network memory size, and that the low-frequency signal was observed in the inferred PPG signal, suggesting that it can provide meaningful results to the study when developing the rPPG algorithm.<\/cite><\/blockquote>\n\n\n\n<p><\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p><em>(SCIE) Yeon-Ji Park, Min-a Lee, Geun-Je Yang, Soo Jun Park* and Chae-Bong Sohn* \u201cBiomedical Text NER Tagging Tool with Web Interface for Generating BERT-Based Fine-Tuning Dataset\u201d, Applied Sciences 12.12012 (2022): 1-13. (IF 2.838 Q2)<\/em><\/p>\n<cite><strong>Abstract<\/strong>: In this paper, a tagging tool is developed to streamline the process of locating tags for each term and manually selecting the target term. It directly extracts the terms to be tagged from sentences and displays them to the user. It also increases tagging efficiency by allowing users to reflect candidate categories in untagged terms. It is based on annotations automatically generated using machine learning. Subsequently, this architecture is fine-tuned using Bidirectional Encoder Representations from Transformers (BERT) to enable the tagging of terms that cannot be captured using Named-Entity Recognition (NER). The tagged text data extracted using the proposed tagging tool can be used as an additional training dataset. The tagging tool, which receives and saves new NE annotation input online, is added to the NER and RE web interfaces using BERT. Annotation information downloaded by the user includes the category (e.g., diseases, genes\/proteins) and the list of words associated with the named entity selected by the user. The results reveal that the RE and NER results are improved using the proposed web service by collecting more NE annotation data and fine-tuning the model using generated datasets.
Our application programming interfaces and demonstrations are available to the public via the website link provided in this paper.<\/cite><\/blockquote>\n\n\n\n<p><\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>(SCIE) Dae-Yeol Kim, Kwangkee Lee* and Chae-Bong Sohn*, &#8220;Assessment of ROI Selection for Facial Video-Based rPPG&#8221;, Sensors 21.7923 (2021): 1-15. (IF 3.576 Q1)<\/p>\n<cite><strong>Abstract<\/strong>: In general, facial image-based remote photoplethysmography (rPPG) methods use color-based and patch-based region-of-interest (ROI) selection methods to estimate the blood volume pulse (BVP) and beats per minute (BPM). Anatomically, the thickness of the skin is not uniform in all areas of the face, so the same diffuse reflection information cannot be obtained in each area. In recent years, various studies have presented experimental results for their ROIs but did not provide a valid rationale for the proposed regions. In this paper, to see the effect of skin thickness on the accuracy of the rPPG algorithm, we conducted an experiment on 39 anatomically divided facial regions. Experiments were performed with seven algorithms (CHROM, GREEN, ICA, PBV, POS, SSR, and LGI) using the UBFC-rPPG and LGI-PPGI datasets considering 29 selected regions and two adjusted regions out of 39 anatomically classified regions. We proposed a BVP similarity evaluation metric to find a region with high accuracy. We conducted additional experiments on the TOP-5 regions and BOT-5 regions and presented the validity of the proposed ROIs.
The TOP-5 regions showed relatively high accuracy compared to the previous algorithm\u2019s ROI, suggesting that the anatomical characteristics of the ROI should be considered when developing a facial image-based rPPG algorithm.<\/cite><\/blockquote>\n\n\n\n<p><\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>(SCIE) Soo-Young Cho, Dae-Yeol Kim, Su-Yeong Oh and Chae-Bong Sohn*, &#8220;Reducing System Load of Effective Video Using a Network Model&#8221;, Applied Sciences 11.9665 (2021): 1-18. (IF 2.679 Q2)<\/p>\n<cite><strong>Abstract:<\/strong> Recently, as non-face-to-face work has become more common, the development of streaming services has become a significant issue. As these services are applied in increasingly diverse fields, various problems are caused by the overloading of systems when users try to transmit high-quality images. In this paper, SRGAN (Super Resolution Generative Adversarial Network) and DAIN (Depth-Aware Video Frame Interpolation) deep learning models were used to reduce the overload that occurs during real-time video transmission. Images were divided into a FoV (Field of view) region and a non-FoV (Non-Field of view) region, and SRGAN was applied to the former, DAIN to the latter. Through this process, image quality was improved and system load was reduced.<\/cite><\/blockquote>\n\n\n\n<p><\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>(SCIE) Yoojin Jeong, and Chae-Bong Sohn*, &#8220;Readily Design and Try-On Garments by Manipulating Segmentation Images&#8221;, Electronics 9.1553 (2020): 1-11. (IF 2.412 Q2)<\/p>\n<cite><strong>Abstract<\/strong>: Recently, fashion industries have introduced artificial intelligence to provide new services, and research to combine fashion design and artificial intelligence has been continuously conducted.
Among them, generative adversarial networks that synthesize realistic-looking images have been widely applied in the fashion industry. In this paper, a new apparel image is created using a generative model that can apply a new style to a desired area in a segmented image. It also creates a new fashion image by manipulating the segmentation image. Thus, interactive fashion image manipulation, which enables users to edit images by controlling segmentation images, is possible. This allows people to try new styles without the inconvenience of traveling or changing clothes. Furthermore, they can easily determine which colors and patterns best suit the clothes they wear, or whether their clothes match those worn by other people. Therefore, user-centered fashion design is possible. It is useful for virtually trying on or recommending clothes.<\/cite><\/blockquote>\n\n\n\n<p><\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>(SCIE) Chan-Il Park, and Chae-Bong Sohn*, &#8220;Data Augmentation for Human Keypoint Estimation Deep Learning based Sign Language Translation&#8221;, Electronics 9.1257 (2020): 1-9. (IF 2.412 Q2)<\/p>\n<cite><strong>Abstract<\/strong>: Deep learning technology has developed constantly and is applied in many fields. In order to apply deep learning techniques correctly, sufficient learning must come first. Various conditions are necessary for sufficient learning. One of the most important conditions is training data. Collecting sufficient training data is fundamental, because if the training data are insufficient, deep learning will not be done properly. Many types of training data have already been collected, but not all of them; some must be collected directly. Collecting takes a lot of time and hard work. To reduce this effort, the data augmentation method is used to increase the training data.
Data augmentation has some common methods, but often requires different methods for specific data. For example, in order to recognize sign language, video data processed with OpenPose are used. In this paper, we propose a new data augmentation method for sign language data used for translation learning, and we expect the proposed method to improve learning performance.<\/cite><\/blockquote>\n\n\n\n<p><\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>(SCIE) Tegg Taekyong Sung, Jeongsoo Ha, Jeewoo Kim, Alex Yahja, Chae-Bong Sohn*, and Bo Ryu, &#8220;DeepSoCS: A Neural Scheduler for Heterogeneous System-on-Chip (SoC) Resource Scheduling&#8221;, Electronics 9.936 (2020): 1-16. (IF 2.412 Q2)<\/p>\n<cite><strong>Abstract<\/strong>: In this paper, we present a novel scheduling solution for a class of System-on-Chip (SoC) systems where heterogeneous chip resources (DSP, FPGA, GPU, etc.) must be efficiently scheduled for continuously arriving hierarchical jobs with their tasks represented by a directed acyclic graph. Traditionally, heuristic algorithms have been widely used for many resource scheduling domains, and Heterogeneous Earliest Finish Time (HEFT) has been a dominating state-of-the-art technique across a broad range of heterogeneous resource scheduling domains over many years. Despite their long-standing popularity, HEFT-like algorithms are known to be vulnerable to a small amount of noise added to the environment. Our Deep Reinforcement Learning (DRL)-based SoC Scheduler (DeepSoCS), capable of learning the \u201cbest\u201d task ordering under dynamic environment changes, overcomes the brittleness of rule-based schedulers such as HEFT with significantly higher performance across different types of jobs.
We describe a DeepSoCS design process using a real-time heterogeneous SoC scheduling emulator, discuss major challenges, and present two novel neural network design features that lead to outperforming HEFT: (i) hierarchical job- and task-graph embedding; and (ii) efficient use of real-time task information in the state space. Furthermore, we introduce effective techniques to address two fundamental challenges present in our environment: delayed consequences and joint actions. Through an extensive simulation study, we show that DeepSoCS achieves significantly better job execution time than HEFT with a higher level of robustness under realistic noise conditions. We conclude with a discussion of the potential improvements for our DeepSoCS neural scheduler.<\/cite><\/blockquote>\n\n\n\n<p><\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>(SCOPUS) Eunsu Goh, Daeyeol Kim, Suyeong Oh, and Chae-Bong Sohn*, &#8220;Automatic Effect Generation Method for 4D Films&#8221;, International Journal of Computing and Digital Systems, 9.2 (2020): 291-298.<\/p>\n<cite><strong>Abstract<\/strong>: The 4D film is a technology that stimulates the viewer&#8217;s senses by using motion chairs and special equipment to increase immersion. 4D movies have recently gained enormous popularity by satisfying the five senses of users through the water spray, wind, and scent effects of motion chairs. Recently, efforts have been made to apply 4D systems to personal equipment such as mobile devices. However, to create 4D content that can be used on 4D devices, a large number of skilled workers have had to create effects manually for several decades.
In this paper, we propose a method of generating 4D effects by classifying audio signals and the motion of important objects in video using the 4D movie\u2019s program stream.<\/cite><\/blockquote>\n\n\n\n<p><\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>(SCOPUS) Yoojin Jeong, Kyoung Chul Kim, Kwang-Chul Son, and Chae-Bong Sohn*, &#8220;A Hanbok Design and Improve the Results using GAN&#8221;, International Journal of Engineering Research and Technology, 12.12 (2020): 3038-3040.<\/p>\n<cite><strong>Abstract<\/strong>: In this study, a generative adversarial network (GAN) was used to design the Korean traditional clothes, Hanbok. Style transfer methods are used to create Hanbok images based on contour images of Hanbok by learning domain translation between the color domain and the edge domain with a GAN algorithm. Among the style transfer methods, DiscoGAN was used. Furthermore, CycleGAN and SRGAN were used to improve the resulting images of DiscoGAN.<\/cite><\/blockquote>\n\n\n\n<p><\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>(SCOPUS) Sooyoung Cho, Daeyeol Kim, Sinwoo Yoo, Kyunghak Lee, Chae-Bong Sohn. &#8220;Automatic Music Selection Algorithm Based on Background Image.&#8221; International Journal of Innovative Technology and Exploring Engineering (IJITEE) 8.8S2 (2019): 332-335.<\/p>\n<cite><strong>Abstract Background\/Objectives<\/strong>: Game music has the characteristic that predetermined music is repeated according to the area in the game.<br><strong>Methods\/Statistical analysis<\/strong>: In this paper, we propose an algorithm in which varied music is played in the game. The game background is extracted as an image by utilizing the screenshot function. First, histograms of similar images were learned.
The classification of the background is determined using the learned histogram, and one of the music tracks corresponding to the tag created by the user is played.<br><strong>Findings<\/strong>: For each image, a histogram was determined. RGB and Lab histograms are presented in a table. As a result, game screenshots and other input images were judged to be similar images.<br><strong>Improvements\/Applications<\/strong>: It can be used for video processing and other editing functions. Learning through algorithms can be used in many ways.<\/cite><\/blockquote>\n\n\n\n<p><\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>(SCOPUS) Sang-Geun Choi, and Chae-Bong Sohn. &#8220;Detection of HGG and LGG Brain Tumors using U-Net.&#8221; <em>Medico-Legal Update<\/em>&nbsp;19.1 (2019): 560-565.<\/p>\n<cite><strong>Background\/Objectives<\/strong>: Advancements in medical equipment have enabled accurate and quick diagnosis in the medical field. However, the increase in the number of medical staff is slower than the rate of medical equipment development. This has resulted in an increased risk of diagnostic misinterpretation. The purpose of this paper is to help medical staff with diagnosis through artificial neural networks (ANNs).<br><strong>Methods\/Statistical analysis<\/strong>: We selected U-Net among artificial neural networks. U-Net is highly accurate in medical imaging. The dataset for training the network was obtained from the Brain Tumor Segmentation Challenge (BraTS). This dataset contains four classes of brain tumor data, and it is suitable for learning a variety of brain tumors. We used the F-Score to measure the accuracy of the trained network.<br><strong>Findings: <\/strong>In this paper, we compare the performance of the network by conducting two experiments. First, we checked the learning progress of the network. Second, we compared the results of training with mixed and single datasets.
In the first experiment, when allowing the network to learn for a total of 200 generations, it was confirmed that the results at 100 generations were the most accurate. In the second experiment, the network was trained on three groups of datasets. The first group consisted of HGG data only, the second group was composed of LGG data only, and the last group was made up of mixed HGG and LGG data. When comparing the results of the first group with the third group, the accuracy for HGG patients was 0.6696 and 0.6222, respectively. Subsequently, the results of the second and the third group were 0.6315 and 0.6228, respectively.<br><strong>Improvements\/Applications<\/strong>: In this experiment, we compared the results obtained when the datasets were mixed and when they were used singly. The results show similar accuracy. Although the accuracy is slightly lower when using a mixture of datasets, it is still sufficient to assist medical staff in diagnosis. It is expected that this will help the development of the medical image processing field by accurately confirming the position and size of a brain tumor regardless of its grade.<\/cite><\/blockquote>\n\n\n\n<p><\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>(SCOPUS) Sooyoung Cho, Daeyeol Kim, Sinwoo Yoo, and Chae-Bong Sohn. &#8220;Generative Adversarial Network-Based Face Recognition Dataset Generation.&#8221; International Journal of Applied Engineering Research 13.22 (2018):15734-15739<\/p>\n<cite><strong>Abstract<\/strong>: Facial recognition has many advantages over other biometric recognition solutions, and recent studies report automated performance almost equal to that of a human. Applying deep learning solutions in this area is very common these days, but there are many obstacles to overcome.
This paper deals with one of them: preparing a dataset of a certain scale by combining an existing dataset with another dataset that this paper suggests. CelebA and the second version of the VGGFace dataset are the base datasets on which the discriminator of the Generative Adversarial Network is trained, and the generator references the new dataset with thousands of Western portraits we added. The suggested new dataset is tested with the DeepFace network, one of the existing facial recognition solutions, and we confirmed that this technique can be used for similar dataset preprocessing layers. There are some facts to consider when applying it to other targets, based on the analyzed differences between the real facial pictures and the generated ones.<\/cite><\/blockquote>\n\n\n\n<p><\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>(SCOPUS) Minyeong Gwon, Eunsu Goh and Chae-Bong Sohn. &#8220;The VR Trip Simulator with Multi Networking of Rule-based Model.&#8221; International Journal of Applied Engineering Research 13.22 (2018):15754-15757<\/p>\n<cite><strong>Abstract<\/strong>: Unity 3D tools (\u2018Unity\u2019) can be used to develop VR applications that can simulate various environments. In this paper, we develop a VR Trip Simulator (&#8216;Simulator&#8217;) for the purpose of travel. The Simulator introduced in this paper was developed based on a rule-based model. A rule-based engine is added to form a State &#8211; Rule &#8211; Action structure for various models. The NPC AI developed with it takes various actions appropriate to the situation. As the simulation is carried out, information related to the destination is automatically provided to the user, enhancing the practicality of the Simulator. In addition, by establishing networking in the TCP\/IP communication environment, it communicates with various users in real time.
This deepens the network programming expertise involved and makes the simulator entertaining, not merely informative.<\/cite><\/blockquote>\n\n\n\n<p><\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>(SCIE) Sooyoung Cho, Daeyeol Kim, Changhyung Kim, Kyoung-Yoon Jeong &amp; Chae-Bong Sohn. &#8220;360-degree video traffic reduction using cloud streaming in mobile.&#8221; <em>Wireless Personal Communications <\/em>105.2 (2018): 635-654.<\/p>\n<cite><strong>Abstract<\/strong>: Recently, 360\u00b0 video streaming services have been commercialized, and various studies are being conducted in the mobile environment. This also makes 360\u00b0 video streaming through a mobile cloud available. The mobile cloud can easily provide the characteristics of the terminal and the existing features of the cloud service, such as the contents and services of the application, in the mobile environment. Using these functions, it can be applied to 360\u00b0 video streaming services in the mobile environment. Unlike the conventional filming method, which only shows the angle that the camera operator intended, 360\u00b0 videos can display the direction desired by the viewer in real time by recording a view in every direction at the same time. By displaying the real-time 3D image information to the user, the viewer can have more realistic content and an interactive experience. 360\u00b0 video typically has a resolution of 4K or more, which causes network load in mobile streaming. The adaptive HTTP streaming service currently provides 360\u00b0 video streams at resolutions proportional to the available bandwidth. However, this method does not guarantee the quality of the video. Therefore, we propose a high-quality video streaming method with low network load in the mobile environment. The 360\u00b0 video is divided into FoV (field of view) and non-FoV regions, and the image is transmitted with high quality for the FoV and low quality for the non-FoV.
In this paper, we propose a method of FoV on the background frame (FBF), differentiated from the existing HTTP adaptive streaming method. It makes it possible to view high-resolution video in a mobile environment while maintaining a consistent level of video quality even in the non-viewing area.<\/cite><\/blockquote>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>(SCOPUS) Tegg Taekyong Sung, Changhyung Kim, Kyunghak Lee and Chae-Bong Sohn. &#8220;Exploring Navigation using Deep Reinforcement Learning.&#8221; International Journal of Applied Engineering Research 13.19 (2018):14447-14450<\/p>\n<cite><strong>Abstract<\/strong>: This paper discusses a navigation system with a deep reinforcement learning approach. Reinforcement learning maximizes a designed reward function and can be applied to diverse domains, such as vision, language, or robotics. In particular, one class of methods, model-free learning, maximizes the objective by trial and error without requiring any environment information. We review recent methodologies for navigation using reinforcement learning and discuss the impact of different observation spaces on the agent. Furthermore, we experiment with a navigating robot using a model-free algorithm and a physics simulator.<\/cite><\/blockquote>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>(SCOPUS) Daeyeol Kim, Tegg Taekyong Sung, SooYoung Cho, Gyunghak Lee and Chae-Bong Sohn. &#8220;A Single Predominant Instrument Recognition of Polyphonic Music Using CNN-based Timbre Analysis.&#8221; International Journal of Engineering &amp; Technology, 7 (3.34) (2018): 590-593<\/p>\n<cite><strong>Abstract<\/strong>: Classifying musical instruments in polyphonic music is a challenging but important task in music information retrieval. This work enables automatic tagging of music information, such as genre classification.
Previously, almost every work on spectrogram analysis used the Short-Time Fourier Transform (STFT) and Mel-Frequency Cepstral Coefficients (MFCC). Recently, the sparkgram has been researched and used in audio source analysis. Moreover, for deep learning approaches, modified convolutional neural networks (CNNs) have been widely researched, but many results have not improved drastically. Instead of improving backbone networks, we have focused on the preprocessing process.<br>In this paper, we use a CNN and Hilbert Spectral Analysis (HSA) to solve the polyphonic music problem. The HSA is performed on fixed-length segments of polyphonic music, and a predominant instrument is labeled from its result. We have achieved the state-of-the-art result on the IRMAS dataset and a 3% performance improvement on individual instruments.<\/cite><\/blockquote>\n\n\n\n<p><\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>(SCOPUS) Sooyoung Cho, Sang-Geun Choi, Daeyeol Kim, Gyunghak Lee and Chae-Bong Sohn. &#8220;How to Generate Image Dataset based on 3D Model and Deep Learning Method.&#8221; International Journal of Engineering &amp; Technology, 7 (3.34) (2018): 221-225<\/p>\n<cite><strong>Abstract<\/strong>: The performance of computer vision tasks has been drastically improved by applying deep learning. Tasks such as object recognition, object segmentation, and object tracking have approached the super-human level. Most of the algorithms were trained using supervised learning. In general, the performance of computer vision is improved by increasing the size of the data. The collected data were labeled and used as a dataset for the YOLO algorithm. In this paper, we propose a dataset generation method using Unity, one of the 3D engines. The proposed method makes it easy to obtain the data necessary for learning. We classify 2D polymorphic objects and test them against various data using a deep learning model.
Classification using a CNN with VGG-16 achieved 90% accuracy, and object recognition with Tiny-YOLO, a lightweight variant of the YOLO algorithm, achieved 78% accuracy. Finally, a comparison between virtual and real environments showed accuracies of 97 to 99 percent.<\/cite><\/blockquote>\n\n\n\n<p><\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>(SCOPUS) Jin Sol Choi, Daeyeol Kim, Sooyoung Cho, Sinwoo Yoo and Chae-Bong Sohn. &#8220;Visual Speech Recognition System with Deep Neural Networks.&#8221; International Journal of Applied Engineering Research 13.15 (2018): 12073-12076<\/p>\n<cite><strong>Abstract<\/strong>: Recent artificial intelligence products based on voice recognition cannot be used by the deaf. To solve this problem, we present a \u2018Visual Speech Recognition System\u2019 that applies deep learning to lip movement. The system analyzes mouth shape and processes the time-series data through a 3-dimensional convolutional neural network and a gated recurrent unit. It handles Korean vocabulary and creates subtitles based on the oral movements of the subjects in a video, recognizing individual words rather than whole sentences. We achieved 91.8% accuracy. This system could be useful to the deaf, the hard of hearing, or anyone who needs to communicate without voice.<\/cite><\/blockquote>\n\n\n\n<p><\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>(SCOPUS) Jin Sol Choi, Daeyeol Kim, Sooyoung Cho and Chae-Bong Sohn. &#8220;Deep Learning-Based Lip Analysis System.&#8221; JP Journal of Heat and Mass Transfer SP.1 (2018): 29-33<\/p>\n<cite><strong>Abstract<\/strong>: Recent artificial intelligence products based on voice recognition cannot be used by the deaf. 
To solve this problem, we present a \u2018Lip Analysis System\u2019 that applies deep learning to lip movement. The system analyzes mouth shape and processes the time-series data through a 3-dimensional convolutional neural network and a gated recurrent unit. Our Lip Analysis System handles Korean vocabulary and creates subtitles based on the oral movements of the subjects in a video, recognizing individual words rather than whole sentences. We achieved 91.8% accuracy. This system could be useful to the deaf, the hard of hearing, or anyone who needs to communicate without voice.<\/cite><\/blockquote>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>(SCOPUS) Tegg Taekyong Sung, Daeyeol Kim, Soo Jun Park, and Chae-Bong Sohn. &#8220;Dropout Acts as Auxiliary Exploration&#8221;, International Journal of Applied Engineering Research 13.10 (2018): 7977-7982<\/p>\n<cite><strong>Abstract<\/strong>: Deep neural networks have been used successfully in machine learning, and experiments have suggested that one of its methods, reinforcement learning, corresponds to the functions of the basal ganglia in the brain. One of the critical issues in reinforcement learning is choosing the optimal action for an agent, which is commonly achieved by balancing exploitation and exploration. Recently, dropout, a stochastic regularization method, has been shown to aid exploration. In this paper, we extend dropout to serve as auxiliary exploration in reinforcement learning, especially in continuous-action problems. This method can easily be applied to any algorithm involving a function approximator. We empirically found the optimal dropout rates and layer positions in neural networks. Compared to standard networks, layers with dropout achieved higher rewards in most control tasks. 
Moreover, we suggest a promising methodology for implementing dropout with a probabilistic switch; given its probabilistic behavior, such a switch could be attached to a neuromorphic chip to perform dropout.<\/cite><\/blockquote>\n\n\n\n<p><\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>(SCOPUS) Changhyung Kim and Chae-Bong Sohn. &#8220;Smart Home AMI Service by IoT in DTV Channel.&#8221; Far East Journal of Electronics and Communications 17.4 (2017): 801-806<\/p>\n<cite><strong>Abstract<\/strong>: AMI and IoT-based smart home services are provided in various ways through new information and communication devices such as smartphones and Internet TVs. However, this is costly and time-consuming because users must learn how to operate a new IT (information technology) device. In this respect, the TV is a stable, standard, and familiar household appliance that many people have used for a long time. Especially at home, the TV is used more than any other IT device in terms of user experience, penetration rate, and industry standardization. However, traditional TVs have many limitations for IoT services. This paper suggests a system that delivers IoT services, such as smart home services and AMI, over traditional TVs by transmitting them on DTV (Digital Television) broadcast channels. As a result, IoT services can be used over traditional TV channels.<\/cite><\/blockquote>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>(SCOPUS) Changhyung Kim, Tae Kyung Sung, Kyung Chul Kim, Kyung Yoon Jeong, Seong Jeong and Chae-Bong Sohn. 
&#8220;Low Delay Method for PSIP Information Converter and Transmission in ATSC Digital Broadcast.&#8221; Far East Journal of Electronics and Communications SP.2 (2017): 123-129<\/p>\n<cite><strong>Abstract<\/strong>: Traditional broadcasters that provided analog broadcast services have moved to digital broadcasting services owing to the economic advantages of digital broadcasting. However, a local MSO (Multi-System Operator) needs various service methods, such as changing the virtual channel or reconstructing the PSIP, to retransmit a digital broadcast. The MPEG-2 TS (Transport Stream) carried by terrestrial digital broadcasting contains various PSIP (Program and System Information Protocol) tables. In this paper, we suggest a new low-delay method for converting the PSIP information of an MPEG-2 TS by receiving a terrestrial digital broadcast and analyzing its PSIP.<\/cite><\/blockquote>\n\n\n\n<p><\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>(SCOPUS) Jung-Ho Kim, Yong-Suk Choi, Soon-Chul Kwon, Kwang-Chul Son, Chae-Bong Sohn and Seung-Hyun Lee. &#8220;The Influence on Changes of Visual Function by Watching 3D Images &#8211; Focused on Blink Rate and Accommodative Response -.&#8221; INFORMATION 17.12(B) (2014): 6589-6597<\/p>\n\n\n\n<p>(SCOPUS) Jung-Ho Kim, Soon Chul Kwon, Kwang Chul Son, Chae-Bong Sohn and Seung Hyun Lee. &#8220;Effect of 2Dimension and 3Dimension Images on Human Factors.&#8221; International Journal of Internet, Broadcasting and Communication 6.2 (2014): 13-16<\/p>\n\n\n\n<p>(SCOPUS) Kwang-Chul Son, Soon-Chul Kwon, Hyung-Won Jung, Chae-Bong Sohn, &#8220;The Characteristics of the Crystal of CdSe thin films fabricated by electrochemical techniques&#8221;, Life Science Journal, Vol. 11, No. 
7s, 2014<\/p>\n<\/blockquote>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>(SCIE) Hye Jeong Cho, Chae-Bong Sohn and Seoung-Jun Oh, &#8220;Video Content-Based Bit Rate Estimation Scheme for Transcoding in IPTV Services&#8221;, KSII TIIS, Vol. 8, No. 3, 2014<\/p>\n<cite><strong>Abstract<\/strong>: In this paper, a new bit rate estimation scheme is proposed to determine the bit rate for each subclass in an MPEG-2 TS to H.264\/AVC transcoder after dividing an input MPEG-2 TS sequence into several subclasses. Video format transcoding in conventional IPTV and Smart TV services is a time-consuming process, since the input sequence must be fully transcoded several times at different bit rates to decide the bit rate suitable for a service. The proposed scheme automatically decides the bit rate so that the transcoded video sequence in those services can be stored on a video streaming server at the smallest possible size without any subjective quality loss. In the proposed scheme, an input sequence to the transcoder is sub-classified by hierarchical clustering using a parameter value extracted from each frame. The candidate frames of each subclass are used to estimate the bit rate using statistical analysis and a mathematical model. Experimental results show that the proposed scheme reduces the bit rate by, on average, approximately 52% for low-complexity video and 6% for high-complexity video with negligible degradation in subjective quality.<\/cite><\/blockquote>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>(SCI) Soo Young Cho, Jin Choul Chai, Soo Jun Park, Hyemyung Seo, Chae-Bong Sohn, and Young Seek Lee, &#8220;EPITRANS: A Database that Integrates Epigenome and Transcriptome Data&#8221;, Molecules and Cells, Vol. 36, No. 
5, 2013<\/p>\n<cite><strong>Abstract<\/strong>: Epigenetic modifications affect gene expression and thereby govern a wide range of biological processes such as differentiation, development and tumorigenesis. Recent initiatives to define genome-wide DNA methylation and histone modification profiles by microarray and sequencing methods have led to the construction of databases. These databases are repositories for international epigenetic consortiums or provide mining results from PubMed, but do not integrate the epigenetic information with gene expression changes. In order to overcome this limitation, we constructed EPITRANS, a novel database that visualizes the relationships between gene expression and epigenetic modifications. EPITRANS uses combined analysis of epigenetic modification and gene expression to search for cell function-related epigenetic and transcriptomic alterations (Freely available on the web at http:\/\/epitrans.org).<\/cite><\/blockquote>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>(SCIE) Sea-Nae Park, Dong-Gyu Sim, Seoung-Jun Oh, Chang-Beom Ahn, Yung-Lyul Lee, Hochong Park, Chae-Bong Sohn, and Jeongil Seo, &#8220;Residual Signal Compression Based on the Blind Signal Decomposition for Video Coding&#8221;, LNCS 4412, 2007<\/p>\n\n\n\n<p>(SCIE) Su-Yeol Jeon, Chae-Bong Sohn, Ho-Chong Park, Chang-Beom Ahn, and Seoung-Jun Oh, &#8220;Spatial Interpolation Algorithm for Consecutive Block Error Using the JND Method&#8221;, LNCS 4319, 2006<\/p>\n\n\n\n<p>(SCIE) Jun-Seong Hong, Jong-Hyun Choi, Chang-Beom Ahn, Chae-Bong Sohn, Seoung-Jun Oh, and Hochong Park, &#8220;Dual-Domain Quantization for Transform Coding of Speech and Audio Signals&#8221;, LNCS 3767, 2005<\/p>\n\n\n\n<p>(SCIE) Sang-Jun Yu, Chae-Bong Sohn, Seoung-Jun Oh, and Chang-Beom Ahn, &#8220;Multimedia: An SIMD \u2013 Based Efficient 4&#215;4 2DTransform Method&#8221;, LNCS 3480, 
2005<\/p>\n<\/blockquote>\n\n\n\n<p><\/p>\n\n\n\n<p><strong>International Conferences<\/strong><\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>(NIPS-2018) Aleksandra Malysheva, Tegg Taekyong Sung, Chae-Bong Sohn, Daniel Kudenko, Aleksei Shpilman. &#8220;Deep Multi-Agent Reinforcement Learning with Relevance Graphs.&#8221; Thirty-second Conference on Neural Information Processing Systems. <em>arXiv preprint arXiv:1811.12557<\/em>&nbsp;(2018).<\/p>\n<cite><strong>Abstract<\/strong>: Over recent years, deep reinforcement learning has shown strong successes in complex single-agent tasks, and more recently this approach has also been applied to multi-agent domains. In this paper, we propose a novel approach to multi-agent reinforcement learning (MARL), called MAGnet, that utilizes a relevance graph representation of the environment obtained by a self-attention mechanism [17], and a message-generation technique inspired by the NerveNet architecture [18]. We applied our MAGnet approach to the Pommerman game [11], and the results show that it significantly outperforms state-of-the-art MARL solutions, including DQN, MADDPG, and MCTS.<\/cite><\/blockquote>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>(DTMBIO-KMH18) Tegg Taekyung Sung, Chae-Bong Sohn, Soo Jun Park, &#8220;GDMiner: Gene-Disease relation Miner system&#8221;, ACM 12th International Workshop on Data and Text Mining in Biomedical Informatics (DTMBio), October 22, 2018<\/p>\n<cite><strong>Abstract<\/strong>: The number of published articles and journals is increasing at a considerable rate, and published information is growing continuously and quickly. Because of this, research on acquiring knowledge automatically has been carried out in the areas of information retrieval, information extraction, and text mining. 
Information retrieval approaches work well for specific topics where the number of related articles is small, but as that number grows, search skill and knowledge-acquisition ability alone are no longer sufficient. Although many efforts have been made to extract information from the literature, most approaches have concentrated on specific entities, such as proteins, genes, and their interactions, and much information still remains in unstructured text. Therefore, we have developed a system that discovers relations between various categories of biomedical entities. Our system collects abstracts from PubMed using queries that represent a topic and visualizes relationships in the collection through automatic information extraction.<\/cite><\/blockquote>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>(IEEE ICCE-2018) HyeonSu Kim, SangBum Nam, SangGeun Choi, ChangHyung Kim, Tegg TaeKyong Sung, and Chae-Bong Sohn. &#8220;HLS-based 360 VR using spatial segmented adaptive streaming.&#8221; <em>2018 IEEE international conference on consumer electronics<\/em>. IEEE, 2018.<\/p>\n<cite><strong>Abstract<\/strong>: Recently, with advances in VR (Virtual Reality) content and HMDs (Head Mounted Displays), research and development on 360VR video have progressed actively. Most recent VR content is also provided in ultra-high definition, at 4K (UHD), 8K (SUHD), and beyond. Even with the most efficient video compression, H.265, transmission efficiency for such 360VR videos suffers because unseen fields are over-transmitted in network streaming services. In this paper, the server and network load problem is solved by extracting and utilizing information about the user-concentrated FOV (Field of View). Based on this concept, we propose the Spatial Segmented Adaptive Streaming (SSAS) method. By transmitting original-quality video for the currently viewed field while transmitting degraded-quality video for other fields, the network load can be reduced. 
However, this selective transmission method introduces a quality-switching delay when the FOV moves. Therefore, we propose an HLS-based real-time adaptive streaming method that segments the video into fields and pre-encodes each quality level.<\/cite><\/blockquote>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>(IEEE ICCE-2012) Dae-Young Noh, Ji-Eun Kim, Chae-Bong Sohn, and Seoung-Jun Oh. &#8220;A Fast Luminance Intra 4&#215;4 Prediction Mode Decision Method by Statistical Analysis of Residual Data in H.264\/AVC.&#8221; <em>2012 IEEE international conference on consumer electronics<\/em>. IEEE, 2012.<\/p>\n<cite><strong>Abstract<\/strong>: In H.264\/AVC, intra prediction mode decision using rate distortion optimization (RDO) improves coding efficiency but requires high computational complexity. There is a close correlation between the best mode chosen by RDO and the energy of the residual data. In this paper, we propose a fast intra 4\u00d74 block prediction mode decision method based on statistical analysis of the relationship between RDO and residual data. The proposed method reduces intra 4\u00d74 block encoding time by about 57.4%, while decreasing coding gain by only about 0.29%.<\/cite><\/blockquote>\n\n\n\n<p><strong>International Journals<\/strong><\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>Kyu Jung Choi and Chae-Bong Sohn. &#8220;AI Referee with Mask R-CNN&#8221;, European Journal of Advances in Engineering and Technology, Vol. 7, No. 2, 2020<\/p>\n<cite><strong>Abstract<\/strong>: Object detection is a fundamental field of computer vision that has received much attention and developed greatly in recent years. As the field has progressed, it has been applied in many areas, such as sports, surveillance, and autonomous driving. This paper describes object detection algorithms and the papers in which they are applied. 
In particular, the three-second rule of basketball is taken as an example: if an attacker or defender without the ball stays in the paint zone for 3 seconds, a three-second violation is called.<\/cite><\/blockquote>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>Seung-Soo Jeong and Chae-Bong Sohn, &#8220;Temporal Error Concealment Algorithm Using Adaptive Multi-Side Boundary Matching Principle&#8221;,&nbsp;International Journal of Computer Science and Network Security, Vol. 8, No. 12, 2008<\/p>\n\n\n\n<p>Sang-Jun Yu and Chae-Bong Sohn, &#8220;Enhanced Transform Domain Intra Prediction for MPEG-2 to&nbsp;H.264\/AVC Transcoding&#8221;,&nbsp;International Journal of Computer Science and Network Security, Vol. 7, No. 12, 2007<\/p>\n\n\n\n<p>Chae-Bong Sohn and Hye-Jeong Cho, &#8220;An Efficient SIMD-based Quarter-Pixel Interpolation Method for&nbsp;H.264\/AVC&#8221;,&nbsp;International Journal of Computer Science and Network Security, Vol. 6, No. 11, 2006<\/p>\n<\/blockquote>\n\n\n\n<p><strong>Domestic KCI Indexed Journals<\/strong><\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>\ubc15\uc5f0\uc9c0, \uc591\uadfc\uc81c, &#8220;\ud55c\uad6d\uc5b4 BERT \ubaa8\ub378\uc744 \ud65c\uc6a9\ud55c \uccad\uac01 \uc815\ubcf4 \uae30\ubc18 \uad11\uace0 \uc601\uc0c1 \ubd84\ub958 \ubc29\ubc95\ub860&#8221;, \ub514\uc9c0\ud138\ucf58\ud150\uce20\ud559\ud68c\ub17c\ubb38\uc9c0, 25.1 (2024): 121-131<\/p>\n\n\n\n<p>\ub098\uc900\uc601, \uc774\uad11\uae30, \uace0\uc740\uc218, \uae40\ub300\uc5f4, \uc190\ucc44\ubd09, &#8220;\ub9c8\uc774\ub370\uc774\ud130 \ud658\uacbd\uc5d0\uc11c \uac1c\uc778\uc758 \ubbfc\uac10 \ub370\uc774\ud130 \uc8fc\uad8c\ud655\ubcf4\ub97c \uc704\ud55c \ube44\ub300\uce6d \ud0a4 \uc554\ud638\ud654 \uae30\ubc18 \uc6d0\uaca9\uc9c4\ub8cc\uc2dc\uc2a4\ud15c&#8221;, 33.6 (2023): 485-494<\/p>\n\n\n\n<p>\uc774\ubbfc\uc544, \ubc15\uc5f0\uc9c0, \ub098\uc900\uc601, \uc190\ucc44\ubd09, &#8220;KoBERT, 
KoGPT-2, KoBART \ud65c\uc6a9 \ubc0f \ud558\uc774\ud37c\ud30c\ub77c\ubbf8\ud130 \ucd5c\uc801\ud654\ub97c \uc9c4\ud589\ud55c \ub9ac\ubdf0 \uac10\uc131\ubd84\uc11d \uc560\ud50c\ub9ac\ucf00\uc774\uc158 \uad6c\ud604&#8221;, \ub514\uc9c0\ud138\ucf58\ud150\uce20\ud559\ud68c\ub17c\ubb38\uc9c0, 24.11 (2023): 2831-2840<\/p>\n\n\n\n<p>\ubb38\uc885\ud604, \uc190\ucc44\ubd09, &#8220;\ub4dc\ub860 \ud658\uacbd\uc5d0\uc11c \uc2e4\uc2dc\uac04 \uac1d\uccb4 \ud0d0\uc9c0\ub97c \uc704\ud55c \ub525\ub7ec\ub2dd \ub124\ud2b8\uc6cc\ud06c \uae30\uc220 \ub3d9\ud5a5&#8221;, \uc120\uc9c4\uad6d\ubc29\uc5f0\uad6c, 6.2 (2023): 181-196<\/p>\n\n\n\n<p>\uc815\uc11c\uc601, \uc190\ucc44\ubd09, \uc720\uc815\ud638, &#8220;\ucf58\ud06c\ub9ac\ud2b8 \uade0\uc5f4 \uae4a\uc774 \ucd94\uc815\uc5d0 \uc720\uc758\ubbf8\ud55c \uc774\ubbf8\uc9c0 \ud2b9\uc131 \ubcc0\uc218\uc5d0 \uad00\ud55c \uc5f0\uad6c&#8221;, \ud55c\uad6d\ud37c\uc2e4\ub9ac\ud2f0\ub9e4\ub2c8\uc9c0\uba3c\ud2b8\ud559\ud68c\uc9c0, 16.2 (2021): 43-51<\/p>\n\n\n\n<p>\ubc15\uc5f0\uc9c0, \uc815\uc720\uc9c4, \uc190\ucc44\ubd09, &#8220;\ub525\ub7ec\ub2dd\uc744 \uc774\uc6a9\ud55c \uad70 \ub0b4\uc678 \uac70\uc218\uc790 \ud589\ub3d9 \uc778\uc2dd: \ud0a4\ud3ec\uc778\ud2b8 2D \uc2a4\ucf00\uc77c\ub9c1\uc744 \uc911\uc2ec\uc73c\ub85c&#8221;, \uc120\uc9c4\uad6d\ubc29\uc5f0\uad6c, 4.1 (2021): 43-59<\/p>\n\n\n\n<p>\ucd5c\uaddc\uc815, \uc624\uc218\uc601, \uc190\ucc44\ubd09, &#8220;\uc9c0\ub2a5\ud615 \uac10\uc2dc \uc815\ucc30 \uc2dc\uc2a4\ud15c \uad6c\ucd95\uc744 \uc704\ud55c OpenPose\uc640 Deep Learning \uae30\uc220 \uc801\uc6a9\ubc29\uc548 \uc5f0\uad6c&#8221;, \uc120\uc9c4\uad6d\ubc29\uc5f0\uad6c, 3.3 (2020): 113-132<\/p>\n\n\n\n<p>\ucd5c\uc9c4\uc194, \uae40\uacbd\ucca0, \uc190\ucc44\ubd09, &#8220;\uc0ac\uc6b4\ub4dc \ub514\uc790\uc778\uc744 \uc704\ud55c K-POP \uc74c\uc545\uc758 Wave-U-Net \ubc0f \uc8fc\ud30c\uc218 \ubd84\uc11d\uc744 \ud1b5\ud55c \uc790\ub3d9 Bass line \ud45c\uae30&#8221;, \ud55c\uad6d\ub514\uc790\uc778\ub9ac\uc11c\uce58, 4.3 (2019): 
159-168<\/p>\n\n\n\n<p>\uc815\uc720\uc9c4, \uae40\uacbd\ucca0, \uc190\ucc44\ubd09, &#8220;Generative Adversarial Network\uc744 \uc774\uc6a9\ud55c \ud55c\ubcf5 \ub514\uc790\uc778 DiscoGAN, CycleGAN, Munit\uc744 \uc911\uc2ec\uc73c\ub85c&#8221;, \ud55c\uad6d\ub514\uc790\uc778\ub9ac\uc11c\uce58, 4.3 (2019): 22-29<\/p>\n\n\n\n<p>\uc815\uc131, \uc190\ucc44\ubd09. &#8220;DS3\uc640 ARIA \uc54c\uace0\ub9ac\uc998\uc744 \uc774\uc6a9\ud55c \uc778\ud130\ud398\uc774\uc2a4 \ub2e4\uc911 \uc5f0\ub3d9 \ubcf4\uc548\uc7a5\uce58\uc758 \uad6c\ud604.&#8221; \ub300\ud55c\uc804\uc790\uacf5\ud559\ud68c\ub17c\ubb38\uc9c0 55.8 (2018): 127-133<\/p>\n\n\n\n<p>\uc870\uc218\uc601, \uae40\ub300\uc5f4, \uae40\ubb38\uc11d, \uc190\ucc44\ubd09. &#8220;\uc5bc\uad74 \uc778\uc2dd \ub370\uc774\ud130 \uc138\ud2b8 \uc0dd\uc131\uc5d0 \uad00\ud55c \uc5f0\uad6c.&#8221; \ud55c\uad6d\ub514\uc790\uc778\ub9ac\uc11c\uce58 3.1 (2018): 85-93<\/p>\n\n\n\n<p>\ucd5c\uc9c4\uc194, \ucd5c\uc0c1\uadfc, \uae40\ubb38\uc11d, \uc190\ucc44\ubd09. &#8220;CNN\uacfc&nbsp; OpenPose \ub77c\uc774\ube0c\ub7ec\ub9ac\ub97c \ud65c\uc6a9\ud55c \uc2e4\uc2dc\uac04 \uc218\ud654 \ud1b5\uc5ed\uae30.&#8221; \ud55c\uad6d\ub514\uc790\uc778\ub9ac\uc11c\uce58 3.1 (2018): 94-101<\/p>\n\n\n\n<p>\uc870\uc218\uc601, \uc190\ucc44\ubd09, \uae40\ubb38\uc11d. &#8220;Generative adversarial nets\ub97c \uc774\uc6a9\ud55c \ube48\uc13c\ud2b8 \ubc18 \uace0\ud750 \uc774\ubbf8\uc9c0 \uc0dd\uc131 \uc2dc\uc2a4\ud15c.&#8221; \ud55c\uad6d\ub514\uc790\uc778\ub9ac\uc11c\uce58 2.3 (2017): 85-92<\/p>\n\n\n\n<p>\uae40\ub300\uc5f4, \uc190\ucc44\ubd09, \uae40\ubb38\uc11d. &#8220;\uac1c\uc778 \ub9de\ucda4\ud615 \uad11\uace0 \uc81c\uc791 \ubc0f \uc1a1\ucd9c\uc5d0 \uad00\ud55c \uc5f0\uad6c.&#8221; \ud55c\uad6d\ub514\uc790\uc778\ub9ac\uc11c\uce58 2.3 (2017): 18-25<\/p>\n\n\n\n<p>Dae Yeol Kim, Soo Young Cho, Chan Hyeong Park, Chae-Bong Sohn. 
&#8220;Action Game with Automatic Background Music Generation Using Genetic Algorithm.&#8221; Korean Society For Computer Game 29.2 (2016): 99-106<\/p>\n\n\n\n<p>\uc774\uae30\uc6c5, \uc190\ucc44\ubd09. &#8220;\ucef4\ud4e8\ud130 \uac8c\uc784\uc744 \uc704\ud55c \uc74c\uc545 \uae30\ud638\uc758 \ubcc0\ud654\uc5d0 \uac15\uc778\ud55c \uc545\ubcf4\uc778\uc2dd \uc2dc\uc2a4\ud15c.&#8221; \ud55c\uad6d\ucef4\ud4e8\ud130\uac8c\uc784\ud559\ud68c\ub17c\ubb38\uc9c0 28.4 (2015): 17-26<\/p>\n\n\n\n<p>\uae40\ub0a8\ud6c8, \uc815\ud615\uc6d0, \uc190\ucc44\ubd09, \uc190\uad11\ucca0. &#8220;\uc628\ub77c\uc778 \uac8c\uc784 \uc11c\ube44\uc2a4 \uc601\uc18d\uc131\uc744 \uc704\ud55c \ub2e4\uc911 \uc5f0\uacb0 \uc2dc\uc2a4\ud15c.&#8221; \ud55c\uad6d\ucef4\ud4e8\ud130\uac8c\uc784\ud559\ud68c\ub17c\ubb38\uc9c0 27.3 (2014): 17-26<\/p>\n\n\n\n<p>\uc804\uc131\ud558, \uc804\ud604\ubb34, \uc2e0\uc131\uad00, \uc190\ucc44\ubd09, \uc591\ud6c8\uae30. &#8220;\uc774\ub3d9 \ubb3c\uccb4\uc758 \ud0dc\uae45\uc744 \uc704\ud55c \ub514\uc9c0\ud138 \ube54\ud3ec\ubc0d \uae30\ubc18 RFID \uc2dc\uc2a4\ud15c.&#8221; \ud55c\uad6d\uc815\ubcf4\ud1b5\uc2e0\ud559\ud68c\ub17c\ubb38\uc9c0 18.7 (2014): 1713-1720<\/p>\n\n\n\n<p>\uae40\uc131\uc77c, \uc190\ucc44\ubd09. &#8220;ISDB-T \uc2dc\uc2a4\ud15c\uc744 \uc704\ud55c SNR \ucd94\uc815\uae30 \uad6c\ud604.&#8221; \ubc29\uc1a1\uacf5\ud559\ud68c\ub17c\ubb38\uc9c0 18.6 (2013): 927-934<\/p>\n\n\n\n<p>\uc190\ucc44\ubd09, \uc190\uad11\ucca0, \uc815\ud615\uc6d0. &#8220;\uc99d\uac15\ud604\uc2e4 \uae30\ubc18 \ubb38\ud654\uc7ac \ud559\uc2b5 \uac8c\uc784 \ud504\ub85c\ud1a0\ud0c0\uc785 \uc124\uacc4.&#8221; \ud55c\uad6d\ucef4\ud4e8\ud130\uac8c\uc784\ud559\ud68c\ub17c\ubb38\uc9c0 26.3 (2013): 119-124<\/p>\n\n\n\n<p>\uc190\ucc44\ubd09, \ubc15\uc218\uc900, \uc624\uc2b9\uc900, \uc548\ucc3d\ubc94, \ubc15\ud638\uc815, \uc2ec\ub3d9\uaddc. 
&#8220;u-\ud53c\ud2b8\ub2c8\uc2a4 \uc2dc\uc2a4\ud15c \uae30\uc220.&#8221; \ud55c\uad6d\ud1b5\uc2e0\ud559\ud68c\uc9c0 (2009): 14-18<\/p>\n\n\n\n<p>\uc870\ud61c\uc815, \uae40\uc9c0\uc740, \uc190\ucc44\ubd09, \uc815\uad11\uc218, \uc624\uc2b9\uc900. &#8220;\ud1b5\uacc4\uc801 \ubd84\uc11d \uae30\ubc18 \ubd88\ubc95 \ubcf5\uc81c \ube44\ub514\uc624 \uc601\uc0c1 \uac10\uc2dd \ubc29\ubc95.&#8221; \ubc29\uc1a1\uacf5\ud559\ud68c\ub17c\ubb38\uc9c0 14.6 (2009): 661-675<\/p>\n<\/blockquote>\n","protected":false}}
=137"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}