Hyperspectral (HS) imaging can capture the detailed spectral signature of each spatial location of a scene and leads to a better understanding of material characteristics than traditional imaging systems. However, existing HS sensors can in practice provide only low-spatial-resolution images at a video rate. Reconstructing a high-resolution HS (HR-HS) image by fusing a low-resolution HS (LR-HS) image and a high-resolution RGB (HR-RGB) image with image processing and machine learning techniques, known as hyperspectral image super-resolution (HSI SR), has therefore attracted much attention. Existing methods for HSI SR fall into two main research directions: mathematical-model-based methods and deep-learning-based methods. Mathematical-model-based methods generally formulate the degradation procedure of the observed LR-HS and HR-RGB images with a mathematical model and employ an optimization strategy to solve it. Owing to the ill-posed nature of the fusion problem, most works leverage hand-crafted priors to model the underlying structure of the latent HR-HS image and pursue a more robust HR-HS solution. Recently, deep-learning-based approaches have evolved for HS image reconstruction, and current efforts mainly concentrate on designing more complicated and deeper network architectures to pursue better performance. Although they achieve impressive reconstruction results compared with mathematical-model-based methods, existing deep learning methods have the following three limitations. 1) They are usually implemented in a fully supervised manner and require a large-scale external dataset consisting of the degraded observations (the LR-HS/HR-RGB images) and the corresponding HR-HS ground-truth images, which are difficult to collect, especially for the HSI SR task. 2) They aim to learn a common model from training triplets, which is insufficient to model the abundant image priors of various HR-HS images with rich content, whose spatial structures and spectral characteristics differ considerably. 3) They generally assume that the spatial and spectral degradation procedures for capturing the LR-HS and HR-RGB images are fixed and known, and synthesize the training triplets accordingly, which yields very poor recovery performance for observations with different degradation procedures. To overcome these limitations, our research focuses on an unsupervised learning-based framework for HSI SR that learns the specific prior of the scene under study without any external dataset. To deal with observations captured under different degradation procedures, we further automatically learn the spatial blurring kernel and the camera spectral response function (CSF) associated with the specific observations, and incorporate them into the unsupervised framework to build a highly generalized blind unsupervised HSI SR paradigm. Moreover, motivated by the fact that cross-scale pattern recurrence frequently exists in natural images, we synthesize pseudo training triplets from the degraded versions of the LR-HS and HR-RGB observations together with the observations themselves, and conduct both supervised and unsupervised internal learning to obtain a scene-specific model for HSI SR, dubbed generalized internal learning. Overall, the main contributions of this dissertation are three-fold and summarized as follows.

1. A deep unsupervised fusion-learning framework for HSI SR is proposed.
Inspired by the insight that convolutional neural networks themselves encode large amounts of low-level image statistics (priors) and can generate images with regular spatial structure and spectral patterns more easily than noisy data, this study proposes an unsupervised framework that automatically generates the target HS image from the LR-HS and HR-RGB observations alone, without any external training database. Specifically, we explore two paradigms for HS image generation: 1) learning the HR-HS target from a randomly sampled noise input to the generative network, from a data-generation view; 2) reconstructing the target from the fused context of the LR-HS and HR-RGB observations as the network input, from a self-supervised learning view. Both paradigms automatically model the specific priors of the scene under study by optimizing the parameters of the generative network instead of the raw HR-HS target. Concretely, we adopt an encoder-decoder architecture for the generative network and generate the target HR-HS image from the noise or fused-context input. We assume that the spatial and spectral degradation procedures of the LR-HS and HR-RGB observations are known, so approximated versions of the observations can be produced by degrading the generated HR-HS image, and the reconstruction errors of these observations intuitively serve as the loss function for network training. Our unsupervised learning framework not only models the specific prior of the scene under study to reconstruct a plausible HR-HS estimate without any external dataset, but also adapts easily to observations captured under various imaging conditions, simply by changing the degradation operations in the framework.
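To make this non-blind setting concrete, the following is a minimal PyTorch-style sketch, assuming the blur kernel and CSF are known; the network depth, layer sizes, and the L1 reconstruction loss are illustrative assumptions rather than the dissertation's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EncoderDecoder(nn.Module):
    """Illustrative generator: maps a noise or fused LR-HS/HR-RGB context input to an HR-HS estimate."""
    def __init__(self, in_ch, hs_ch, feat=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Conv2d(feat, hs_ch, 3, padding=1)

    def forward(self, x):
        return self.decoder(self.encoder(x))

def degrade_spatial(hr_hs, kernel, scale):
    """Known spatial degradation: blur every band with the same kernel, then downsample by `scale`."""
    c = hr_hs.shape[1]
    k = kernel.expand(c, 1, -1, -1)   # kernel of shape (1, 1, kh, kw), shared by all bands
    return F.conv2d(hr_hs, k, stride=scale, padding=kernel.shape[-1] // 2, groups=c)

def degrade_spectral(hr_hs, csf):
    """Known spectral degradation: project the HS bands to RGB with a CSF matrix of shape (3, C)."""
    return torch.einsum('bchw,rc->brhw', hr_hs, csf)

def unsupervised_step(net, net_input, lr_hs, hr_rgb, kernel, csf, scale, optimizer):
    """One training step: the only supervision is the reconstruction error of the two observations."""
    optimizer.zero_grad()
    hr_hs_hat = net(net_input)
    loss = (F.l1_loss(degrade_spatial(hr_hs_hat, kernel, scale), lr_hs)
            + F.l1_loss(degrade_spectral(hr_hs_hat, csf), hr_rgb))
    loss.backward()
    optimizer.step()
    return loss.item()
```

In the blind setting introduced in the next contribution, the fixed `kernel` and `csf` arguments above become learnable convolution weights.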
2. A novel blind learning method for unsupervised HSI SR is proposed.
The deep unsupervised framework above requires the spatial and spectral degradation procedures to be known. However, different optical designs of HS imaging devices and RGB cameras lead to various degradation processes, such as the spatial blurring kernels used when capturing LR-HS images and the camera spectral response functions (CSF) of RGB sensors, and this detailed knowledge is difficult for general users to obtain. Moreover, the concrete computations in the degradation procedures can be further distorted under various imaging conditions, so in real applications it is hard to know the degradation for each scene under study. To handle this issue, this study develops a novel parallel blind unsupervised approach that automatically and jointly learns the degradation parameters and the generative network. Specifically, according to which components are unknown, we propose three variants: 1) a spatial-blind method that automatically learns the spatial blurring kernel used to capture the LR-HS observation, with the CSF of the RGB sensor known; 2) a spectral-blind method that automatically learns the CSF transformation matrix used to capture the HR-RGB observation, with the blurring kernel of the HS imaging device known; 3) a complete-blind method that simultaneously learns both the spatial blurring kernel and the CSF matrix. Based on our previously proposed unsupervised framework, we design special convolution layers that realize the spatial and spectral degradation procedures in parallel, where the layer parameters are treated as the weights of the blurring kernel and the CSF matrix and are learned automatically. The spatial degradation is implemented by a depthwise convolution layer whose kernels are shared across spectral channels and whose stride is set to the expanding scale factor, while the spectral degradation is realized by a pointwise convolution layer with three output channels that produces the approximated HR-RGB image. With this learnable implementation of the degradation procedures, we construct an end-to-end framework that jointly learns the specific prior of the target HR-HS image and the degradation knowledge, and build a highly generalized HSI SR system. Moreover, the framework can be unified to realize different blind HSI SR variants by fixing the parameters of the implemented convolutions to the known blurring kernel or CSF, and it adapts readily to arbitrary observations.

3. A generalized internal learning method for HSI SR is proposed.
Motivated by the strong internal data repetition and cross-scale recurrence of natural images, we further synthesize labeled training triplets using only the LR-HS and HR-RGB observations and combine them with the unlabeled observations as training data, conducting both supervised and unsupervised learning to construct a more robust image-specific CNN model of the HR-HS data under study. Specifically, we downsample the observed LR-HS and HR-RGB images to their "son" versions and produce training triplets from the LR-HS/HR-RGB sons and the LR-HS observation, where the relation among them is the same as that among the LR-HS/HR-RGB observations and the HR-HS target, apart from the difference in resolution. With the synthesized training samples, it is possible to train an image-specific CNN model that takes the observations as input and estimates the HR-HS target, dubbed internal learning. However, the synthesized labeled samples are usually few, especially for a large spatial expanding factor, and further downsampling the LR-HS observation causes severe spectral mixing of surrounding pixels, so the spectral mixing levels deviate between the training and test phases. These limitations can degrade the super-resolved performance of naive internal learning. To mitigate them, we combine naive internal learning with our self-supervised learning method for unsupervised HSI SR and present a generalized internal learning method that achieves more robust HR-HS image reconstruction.
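The triplet synthesis behind internal learning can be sketched as follows, reusing the spatial degradation from the earlier sketch. The choice of blur and downsampling here is an illustrative assumption; the dissertation only states that the observations are downsampled to their son versions.

```python
import torch.nn.functional as F

def synthesize_internal_triplet(lr_hs, hr_rgb, kernel, scale):
    """Build a pseudo labeled triplet from the observations themselves:
    spatially downsampled 'son' versions as inputs, the LR-HS observation as the HR-HS-like label."""
    pad = kernel.shape[-1] // 2
    c = lr_hs.shape[1]
    lr_hs_son = F.conv2d(lr_hs, kernel.expand(c, 1, -1, -1), stride=scale, padding=pad, groups=c)
    hr_rgb_son = F.conv2d(hr_rgb, kernel.expand(3, 1, -1, -1), stride=scale, padding=pad, groups=3)
    # (lr_hs_son, hr_rgb_son) -> lr_hs mirrors the relation (lr_hs, hr_rgb) -> unknown HR-HS target.
    return (lr_hs_son, hr_rgb_son), lr_hs
```

In generalized internal learning, mini-batches mixing these labeled son pairs with the unlabeled observations drive the supervised and self-supervised losses jointly.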
Creators : LIU ZHE Updated At : 2023-12-12 17:03:54
[Background] The prognosis of colorectal cancer (CRC) depends not only on tumor characteristics but also on the host immune response. Focusing on inflammatory cytokine expression, both systemic and in the tumor microenvironment (TME), as measures of the host immune response, we evaluated these factors to examine the relationship between the immunosuppressive state and patient prognosis. [Methods] In 209 patients with resectable CRC, serum cytokine concentrations (IL-1β, IL-6, IL-8, TNF-α) were measured by electrochemiluminescence using preoperatively collected serum samples, and their association with prognosis was examined. Cytokine expression in tumor tissue from resected specimens was evaluated immunohistochemically, separately for tumor cells and stromal cells. In addition, in 10 resected CRC patients, single-cell analysis by mass cytometry was performed using tumor-infiltrating cells extracted from fresh resected specimens. [Results] For recurrence-free survival, no significant association was observed with high versus low serum IL-1β, IL-8, or TNF-α concentrations, whereas the high serum IL-6 group had a significantly poorer prognosis. Elevated blood IL-6 levels were associated with high IL-6 expression in stromal cells within the tumor tissue. Single-cell analysis showed that, among tumor-infiltrating immune cells, IL-6+ cells consisted mainly of myeloid cells, with almost no IL-6 expression in lymphoid cells. In the high IL-6 expression group, the proportions of CD33+HLA-DR- myeloid-derived suppressor cells (MDSCs) and CD4+FOXP3highCD45RA- effector regulatory T cells (eTregs) were significantly higher than in the low IL-6 expression group. Furthermore, the proportion of IL-10+ cells among MDSCs and the proportions of IL-10+ and CTLA-4+ cells among eTregs were significantly higher in the high IL-6 expression group. [Conclusion] Elevated serum IL-6 levels were associated with stromal IL-6 expression and with poor prognosis. High IL-6 expression in tumor-infiltrating immune cells was associated with the accumulation of immunosuppressive cells such as MDSCs and eTregs in the TME, together with increased expression of their functional markers. These IL-6-mediated immunosuppressive mechanisms may contribute to the poor prognosis of CRC patients.
Creators : 山本 常則 Updated At : 2023-12-12 16:05:09
[Background] Interleukin (IL)-33 induces the IL-33/ST2 signaling pathway, which is important in host defense, nerve injury, and inflammation. In contrast, soluble ST2 (sST2), a decoy receptor for IL-33, suppresses the IL-33/ST2 signaling pathway. sST2 is elevated in the serum of patients with various neurological diseases, but its behavior in hypoxic-ischemic encephalopathy (HIE) is unknown. [Objective] The aim of this study was to measure serum IL-33 and sST2 concentrations in HIE and to examine their association with HIE severity and neurological outcome. [Subjects and Methods] Neonates born at 36 weeks of gestation or later with a birth weight of 1,800 g or more who were admitted to the Comprehensive Perinatal Mother and Child Medical Center of Yamaguchi University Hospital between January 2017 and April 2022 were enrolled: 23 in the HIE group and 16 controls. HIE severity was classified as mild, moderate, or severe according to the Sarnat classification, and serum IL-33 and sST2 concentrations were measured within 6 hours after birth and on days 1-2, 3, and 7. The lactate/N-acetylaspartate (Lac/NAA) ratio in the basal ganglia of the HIE group was calculated by proton magnetic resonance spectroscopy, and the presence or absence of neurological sequelae after discharge was followed up. [Results] Serum IL-33 concentrations did not differ among the groups. In contrast, serum sST2 concentrations in the moderate and severe HIE groups were markedly higher than in the control group and increased in correlation with HIE severity. Serum sST2 concentrations showed a significant positive correlation with the Lac/NAA ratio (correlation coefficient = 0.527, P = 0.024), and HIE infants with neurological sequelae had significantly higher sST2 concentrations and Lac/NAA ratios than infants with a favorable outcome (P = 0.020 and P < 0.001, respectively). [Conclusion] Serum sST2 concentration may be useful for predicting HIE severity and neurological outcome.
Creators : 濱野 弘樹 Updated At : 2023-12-12 15:48:42
Open source software (OSS) is adopted for embedded systems, servers, and other uses because it offers quick delivery, cost reduction, and standardization. OSS is therefore used not only for personal purposes but also commercially. Much OSS is developed under the distinctive development style known as the bazaar method, in which many faults are detected and fixed by developers around the world and the fixes are reflected in the next release. Many OSS projects are also developed and managed using the fault big data recorded in bug tracking systems, and are maintained by a small number of developers serving many OSS users. According to the 2022 Open Source Security and Risk Analysis (OSSRA), OSS is an essential part of proprietary software: 97% of the audited codebases contained OSS, and OSS accounted for 78% of all source code. On the other hand, OSS raises issues from various perspectives, so OSS users need to decide whether to adopt an OSS in consideration of each issue, and the managers of open source projects need to manage their projects appropriately because OSS has a large impact on software around the world. This thesis focuses on the following three issues and examines methods by which OSS users and open source project managers can evaluate the stability of open source projects:
1. Selection evaluation and licensing: methods for OSS users to choose among the many available OSS;
2. Vulnerability support: predicting the fix priority of faults reported for an OSS;
3. Maintenance and quality assurance: predicting the appropriate timing of OSS version upgrades, considering the development effort required of OSS users after upgrading.
In "1. Selection evaluation and licensing," we derive an OSS-oriented EVM by applying earned value management (EVM), a project management methodology for measuring project performance and progress, to several open source projects. To derive the OSS-oriented EVM, we apply stochastic models based on software reliability growth models (SRGM) that consider the uncertainty of the development environment in open source projects. We also improve the method of deriving effort in open source projects, because the existing method cannot derive some of the indices in the OSS-oriented EVM; we resolve this issue. The derived OSS-oriented EVM helps OSS users and open source project managers evaluate the stability of their current projects and serves as a decision-making tool for their decisions about and operation of OSS projects. From a different perspective, we also evaluate project stability in terms of the speed of fault fixing, by predicting the time transition of fixing OSS faults reported in the future.
In "2. Vulnerability support," from the viewpoint of open source project managers, we create metrics to detect faults that have a high fix priority and are predicted to take a long time to fix. In addition, we improve the detection accuracy of the proposed metrics by learning not only the bug report data of a specific version but also that of past versions, using a random forest that takes into account the similarity of bug-fix characteristics among versions. This allows project managers to identify the faults that should be prioritized when a large number of faults are reported, and facilitates project operations.
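As an illustration of the cross-version learning idea in "2. Vulnerability support," the following is a minimal scikit-learn sketch: a random forest trained on bug-report data from past versions and applied to the current version. The feature names, the label column, and the data frames are placeholders, not the metrics actually proposed in the thesis.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Placeholder features; the thesis derives its own metrics from bug tracking system data.
FEATURES = ["component", "severity", "num_comments", "days_open"]

def train_fix_priority_model(past_versions: pd.DataFrame, current_version: pd.DataFrame):
    """Learn high-fix-priority labels from past-version bug reports and predict them for the current version."""
    X_train = pd.get_dummies(past_versions[FEATURES])
    y_train = past_versions["high_priority"]   # 1 = should be fixed first
    X_test = pd.get_dummies(current_version[FEATURES]).reindex(columns=X_train.columns, fill_value=0)

    model = RandomForestClassifier(n_estimators=300, random_state=0)
    model.fit(X_train, y_train)
    return model, model.predict(X_test)        # predicted priorities for the newly reported faults
```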
In "3. Maintenance and quality assurance," as an optimal maintenance problem, we predict the appropriate timing of OSS version upgrades considering the maintenance effort required of OSS users after upgrading. Continuing to use a specific OSS version while ignoring its end of life is dangerous in terms of vulnerability, so the version should be upgraded periodically; however, maintenance costs increase when upgrades are frequent. We therefore find the optimal maintenance time by minimizing the total expected software maintenance effort from the OSS users' point of view. In particular, we reflect the progress of open source projects by using the OSS-oriented EVM when deriving the optimal maintenance time. In conclusion, we found that these three perspectives are applicable to evaluating the stability of open source projects. In particular, the OSS-oriented EVM discussed in "1. Selection evaluation and licensing" can contribute to visualizing maintenance effort in open source projects. The proposed methods can potentially contribute to the future development of OSS.
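The cost-minimization step in "3. Maintenance and quality assurance" can be sketched as below: choose the upgrade interval that minimizes total expected maintenance effort over a planning horizon. The effort functions here are illustrative assumptions, not the SRGM/EVM-based model used in the thesis.

```python
import numpy as np

def optimal_upgrade_interval(horizon_days, effort_rate, upgrade_effort):
    """Choose the upgrade interval t (in days) that minimizes total expected maintenance effort
    over the horizon: (horizon / t) upgrade cycles, each costing the upgrade effort plus the
    user-side effort that accumulates while the old version stays in use."""
    t = np.arange(1.0, horizon_days + 1.0)
    per_cycle = upgrade_effort + effort_rate * t ** 1.2   # illustrative within-cycle accumulation
    total = (horizon_days / t) * per_cycle
    best = int(np.argmin(total))
    return int(t[best]), float(total[best])

# Example: with these illustrative numbers, upgrading every few months minimizes total effort.
interval_days, expected_effort = optimal_upgrade_interval(730, effort_rate=0.5, upgrade_effort=40.0)
```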
Creators : Sone Hironobu Updated At : 2023-12-12 17:20:31
Japan's declining birthrate and aging population are difficult to resolve in the short term, and the working-age population (those aged 15 to 65) continues to decline. At the same time, the number of patients with mental illness in the working-age population is increasing, and the medical side responsible for treatment is required to improve efficiency through reforms to the way doctors work. Expectations are therefore placed on psychotherapy performed at home for the effective treatment of the working-age population. This study takes up neurofeedback, one such psychological therapy, and develops and verifies applied equipment for its implementation at home. As a preclinical stage, the proposed measures were verified through measurements of general (non-clinical) participants. Neurofeedback (NFB) is a psychotherapy that uses electroencephalogram (EEG) signals: one's own EEG is visualized and then self-controlled. It is attracting attention because it is a non-drug therapy that provides neuromodulation, and it is being investigated for many clinical applications, with diverse target conditions including chronic pain, ADHD, depression, and mood disorders. However, we believe four tasks must be addressed to ensure the effectiveness of this therapy.
Task 1 is overcoming the difficulty of installing EEG electrodes. NFB is considered a therapy that acts on the plasticity of the cranial nerves and actively promotes the development of neural networks, so a high training frequency is expected to increase its effectiveness. It must therefore be feasible at home, which requires EEG electrodes that can be installed easily. We prototyped an EEG headset with bipolar gel electrodes and, in a trial with children, recorded EEG signals from 30 participants aged 5 to 20. Analysis of the recorded EEG revealed an age-dependent left-hemisphere tendency in β waves and other bands, confirming consistency with previous findings.
Task 2 is determining the EEG derivation site for NFB training. Lead electrodes are usually placed on the scalp for electroencephalography, but it is difficult to place scalp electrodes by oneself, so forehead derivation needs to be considered for easy electrode placement at home. Because EEG waveforms differ regionally within the forehead, the most appropriate derivation site must be selected. For NFB, we explored the optimal forehead site based on its correlation with the top of the head, the usual EEG derivation position. We then performed an EEG network analysis during NFB using the EEG derived from the top of the head and from the optimal forehead region, and analyzed how the brain network during NFB differs with the derivation region. For this task, we identified the optimal forehead derivation site and showed that NFB driven by EEG derived from this site engages the same network as NFB driven by EEG derived from the top of the head.
Task 3 is the method of selecting the EEG frequency band to be derived and self-controlled in NFB therapy (the training-target EEG frequency band).
In previous studies, the EEG frequencies targeted by NFB therapy are diverse and not standardized; even for the same disease, various EEG frequency bands are selected for NFB, and the frequency band is personalized according to the patient's pathology and condition. To make the frequency band determination more logical, we considered it necessary to determine the therapeutic EEG frequency by comparing the basic EEG rhythms of healthy subjects and patients. In this study, we created an EEG basic rhythm evaluation program and collected basic rhythm data from randomly selected subjects. The program consists of seven stages: eyes open, eyes closed, 0-back, rest 1, 2-back, rest 2, and a healing-picture stage. EEG changes arise from external stimuli such as eye opening and closing, concentration, and relaxation, so the program was designed around multiple stimuli that affect EEG dynamics. Its usefulness was confirmed in a preliminary examination of the dominant fluctuation regions by topographic analysis and by network analysis during program execution. EEG measurements with the program were carried out for 89 subjects recruited from the general public, and a database was created. Using the optimal forehead measurement sites (left and right) obtained in Task 2 as derivation sites, significance tests were performed on the power value and content rate of each EEG frequency band at each stage of the program. The α power value increased 2.52-fold with eyes closed, and the θ power value increased 1.67-fold during the 2-back task compared with the 0-back task. We also examined the possibility of clinical application by analyzing the correlation between EEG components and the scores of questionnaires used in clinical diagnosis, mainly the CSI (Central Sensitization Inventory) and POMS 2 (Profile of Mood States 2).
Task 4 is NFB scoring. Continuing psychotherapy requires a visualized score as a reward. We compared two scores, a time-ratio score and an amplitude-ratio score, analyzed their correlations with the questionnaires used in Task 3, and examined which score is optimal. The results suggested that SMR is the frequency band that correlates best with psychological activity during NFB. Some of the psychological scales probably included data above the general average level, which may have provided hypotheses at the preclinical stage.
These four tasks were conducted to demonstrate the technical requirements and effectiveness evaluation for the practical application of NFB, a cognitive psychological training expected to be used frequently at home by patients from children to working-age adults. This research addressed the four tasks and demonstrated the feasibility of frequent NFB training at home for such patients. As a preclinical stage, the study remained within the scope that can be resolved by policy verification based on general participants.
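As an illustration of the quantities discussed in Tasks 3 and 4, the following is a minimal SciPy sketch of band power estimated from a single forehead EEG channel via Welch's method, plus time-ratio and amplitude-ratio style scores. The band edges, epoch length, threshold, and exact score definitions are illustrative assumptions, not the study's implementation.

```python
import numpy as np
from scipy.signal import welch

BANDS = {"theta": (4.0, 8.0), "alpha": (8.0, 13.0), "SMR": (12.0, 15.0)}  # illustrative band edges in Hz

def band_power(eeg, fs, band):
    """Average power of a single EEG channel within a frequency band, estimated with Welch's PSD."""
    freqs, psd = welch(eeg, fs=fs, nperseg=min(len(eeg), int(2 * fs)))
    lo, hi = band
    return float(psd[(freqs >= lo) & (freqs < hi)].mean())

def time_ratio_score(eeg, fs, band, threshold, epoch_sec=1.0):
    """Fraction of epochs whose band power exceeds a threshold (a 'time ratio' style NFB score)."""
    n = int(fs * epoch_sec)
    epochs = [eeg[i:i + n] for i in range(0, len(eeg) - n + 1, n)]
    return float(np.mean([band_power(e, fs, band) > threshold for e in epochs]))

def amplitude_ratio_score(eeg, fs, band, baseline_power):
    """Band power relative to a baseline recording (an 'amplitude ratio' style NFB score)."""
    return band_power(eeg, fs, band) / baseline_power
```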
In the future, the effectiveness of this study will be further evaluated by comparison with clinical data in the areas of chronic pain and mental illnesses such as depression and developmental disorders.
Creators : Oda Kazuyuki Updated At : 2023-12-13 09:39:52
Since the social infrastructure that was intensively developed during Japan's period of high economic growth will deteriorate all at once in the future, maintenance and management of facilities will become an issue. Currently, facility inspection records are paper-based forms and are not designed for automatic processing by computer. The authors developed "Smart Chosa," which realized a database of facility inspections and a GIS system. Smart Chosa could record the location of each inspection photo on a two-dimensional map, but because the inspector must approach the deformed part to photograph it, the position, direction, and size relative to the entire facility could not be grasped. Therefore, we applied 3D GIS to Smart Chosa for sabo dams, created 3D models from photographs taken on site, and conducted research on managing inspection results on the 3D model. This study summarizes the results of research on managing inspection photographs on a 3D GIS in order to improve the efficiency of managing inspection photographs of sabo facilities. This thesis consists of six chapters, and the main content of each chapter is as follows.
[Chapter 1: Introduction] This chapter summarizes the current status and issues of the maintenance and management of social infrastructure in Japan. It organizes existing research trends on the use of 3D models for the maintenance and management of civil engineering facilities, high-precision positioning used for aligning 3D models, efficient inspection of concrete structures, and iPhone LiDAR applications. On that basis, the purpose and focus of this research are described, together with the structure and outline of the thesis.
[Chapter 2: Comparison of 3D models and examination of application to a sabo facility maintenance management system] This chapter compares and examines three types of 3D model for application to the maintenance management system: a BIM/CIM model, a 3D point cloud model, and a 3D surface model. The problem setting is the selection of the 3D model to be used in the system, under the constraint that sabo dams, including existing dams, can be modeled in 3D. At present there are few BIM/CIM models of sabo dams, so for existing sabo dams we consider a 3D surface model created by SfM/MVS processing from photographs taken by UAVs to be useful; the 3D model used in this research is therefore a 3D surface model. This chapter yields knowledge about the 3D models applicable to the sabo facility maintenance management system and about utilizing 3D models in a 3D GIS.
[Chapter 3: Performance evaluation of the RTK receiver used for the sabo facility investigation support system] This chapter surveys the high-precision positioning technology needed to position 3D models of sabo dams and inspection photos on a 3D GIS, and evaluates positioning performance at sabo dams and in the surrounding forests. The problem setting is whether location information can be acquired during surveys of sabo facilities, together with accuracy verification, under the constraint that real-time high-precision positioning is required using inexpensive and small devices in environments unfavorable for satellite positioning (such as sabo dams and forests).
The multi-band receiver whose performance was evaluated was confirmed to have a horizontal variation of 22 mm (2DRMS) even in a poor environment directly below the sabo dam where about 70% of the sky is covered, and the method was confirmed to be applicable to aligning 3D models of photographs.
[Chapter 4: Investigation of image synthesis for creating sabo dam inspection images] This chapter organizes image synthesis methods as a basic examination for 3D model creation. The problem setting is the normalization and image combination necessary for synthesizing (2D) inspection photographs, under the constraint that the inspection photography equipment is a smartphone for field surveys. For feature point detection, we compared two feature types, SIFT and AKAZE, and confirmed their accuracy by experiment; RANSAC was used to remove outliers. Combining these methods, we synthesized images from multiple photographs of the concrete surface of a sabo dam.
[Chapter 5: 3D model creation by SfM/MVS and application to 3D GIS] The problem setting in this chapter is the superimposed display of a 3D sabo dam model and inspection photographs, under the constraint that the equipment usable for creating 3D models of inspection photographs is limited to compact, lightweight equipment that local workers can carry. The chapter first presents an overview of the "Smart Chosa" system, whose scope this research expands from 2D to 3D, and then investigates SfM/MVS processing to create 3D surface models. By creating a 3D model of the sabo dam and 3D models of inspection photos through SfM/MVS processing and importing them into a 3D GIS, we succeeded in superimposing the sabo dam and inspection photos on a 3D map. We also examined a method of creating 3D surface models using an iPhone LiDAR application that performs 3D measurement with the LiDAR function available from the iPhone 12 Pro onward. We compared 3D models created with the iPhone LiDAR app and with MetaShape, software that implements SfM/MVS processing, and confirmed the image resolution and positional accuracy required for use as inspection photographs. To incorporate the created 3D models into 3D GIS software, we examined a method for matching orientation and position, and confirmed that the 3D model of the sabo dam and the 3D models of the inspection photographs can actually be superimposed on the 3D GIS.
[Chapter 6: Summary] This chapter summarizes the results obtained in Chapters 2 to 5 and discusses future issues. The result of this research is a visualization method that makes it easy for people other than field investigators to understand the site situation by importing 3D surface models acquired by various methods into a 3D GIS. These 3D surface models include SfM models created from photos taken with a UAV or smartphone, SfM models created from photos taken with a handheld RTK rover, and 3D models created with the iPhone LiDAR app. Using this method, the deformation position and direction on the sabo dam can be grasped in 3D space, and by superimposing the photographs from each inspection, changes over time can be grasped.
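As an illustration of the Chapter 4 image-synthesis step, the following is a minimal OpenCV sketch using AKAZE features (SIFT was also compared in the thesis), descriptor matching, and RANSAC-based homography estimation to stitch two overlapping inspection photographs. The parameters, canvas size, and the absence of blending are illustrative simplifications, not the system's actual implementation.

```python
import cv2
import numpy as np

def stitch_pair(img1, img2, ratio=0.75):
    """Match AKAZE features between two overlapping inspection photos and warp img2 into img1's frame."""
    akaze = cv2.AKAZE_create()
    kp1, des1 = akaze.detectAndCompute(img1, None)
    kp2, des2 = akaze.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)        # Hamming distance suits AKAZE's binary descriptors
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]   # Lowe's ratio test

    src = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)                # RANSAC discards outlier matches

    h, w = img1.shape[:2]
    return cv2.warpPerspective(img2, H, (2 * w, h))                     # naive canvas size; blending omitted
```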
Creators : 山野 亨 Updated At : 2023-12-13 09:50:26