- Resource type: doctoral thesis
Creators :
DIDIK PRAMONO
Creators :
BAYANZUL ARGAMJAV
Creators :
An Zhenyu
This dissertation aims to explore the adoption of fintech to improve the efficiency, stability, and social impact of microfinance institutions (MFIs) for financial inclusion in Laos. In Chapter 2, we delve into the current state of financial inclusion in Laos and identify the primary barriers and challenges obstructing its progress. Additionally, we analyze the role of MFIs in advancing financial inclusion within the country. In Chapter 3, we examine MFI performance and credit default risk using CAMEL rating systems, allowing us to gain a comprehensive understanding of their financial health when extending loans to underserved populations. The findings highlight the importance of MFIs' risk management and financial stability in advancing greater financial inclusion.
Chapter 4 concentrates on the role of fintech, exploring its potential benefits and risks for enhancing the efficiency, stability, and social impact of MFIs in promoting financial inclusion in Laos. This study establishes the groundwork for fostering more inclusive and sustainable financial practices in the country. Furthermore, it emphasizes the necessity of addressing fintech-related risks as well as balancing the relationship and transaction banking to fully maximize its potential for MFIs seeking to enhance their efficiency, stability, and social impact through fintech adoption.
To understand the factors that affect fintech adoption in MFIs, we develop a theoretical model in Chapter 5 by extending the Technology Acceptance Model (TAM) with perceived risk, government support, and regulation. A survey of managing directors from MFIs provides the data, and the effectiveness of the extended TAM is validated through Structural Equation Modeling (SEM). This study contributes to theoretical development by enriching TAM with additional variables. Applying this extended model to MFIs in Laos provides a more comprehensive understanding of fintech adoption, strengthens TAM's credibility, and contributes to a robust theoretical framework for fintech adoption within the scope of MFIs. Consequently, our study offers practical guidance for practitioners seeking to strengthen influential factors and overcome obstacles in the fintech adoption of MFIs.
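As a simplified, hypothetical illustration of the kind of path relationships an extended TAM posits (the construct names, coefficients, and data below are invented for this sketch and are not the dissertation's survey results), one structural equation of such a model can be fitted by ordinary least squares on simulated construct scores:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Simulated construct scores standing in for averaged survey-scale items
gov_support = rng.normal(3.5, 0.8, n)       # hypothetical government-support construct
perceived_risk = rng.normal(2.5, 0.9, n)
usefulness = 0.5 * gov_support - 0.2 * perceived_risk + rng.normal(0, 0.5, n)
intention = 0.6 * usefulness - 0.3 * perceived_risk + rng.normal(0, 0.5, n)

# One path equation (intention regressed on its antecedents), fitted by OLS
# as a crude stand-in for the measurement-plus-structural machinery of SEM
X = np.column_stack([np.ones(n), usefulness, perceived_risk])
beta, *_ = np.linalg.lstsq(X, intention, rcond=None)
print(beta)  # estimates should land near the simulated paths (0.6, -0.3)
```

A full SEM would additionally model latent constructs from multiple indicators and estimate all paths jointly; this sketch only shows the regression idea behind a single path.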
Through an examination of the situation of financial inclusion in Laos, the role of MFIs in driving financial inclusion, their performance, credit default risk, and fintech adoption, this dissertation demonstrates the potential of fintech and its role in improving the efficiency, stability, and social impact of MFIs for financial inclusion in Laos. Ultimately, it may contribute to the advancement of the country's financial ecosystem and support societal progress.
Creators :
SOMESANOOK PHONGSOUNTHONE
Creators :
Nakamura Takemasa
Reinforced concrete (RC) has been used extensively in the construction of buildings and infrastructure. In particular, RC bridge piers are widely used for elevated bridges on highways, in mountainous areas, and over rivers because of their cost-effectiveness, ease of construction, durability, seismic resistance, and corrosion resistance. In the design and construction of bridge piers, the bond performance between the reinforcement and the concrete is crucial, and sufficient bond strength between the two materials is essential for reliable stress transmission. In most RC structures, deterioration of the bond between reinforcement and concrete at column boundaries and within footings leads to slippage, which reduces the column's load-bearing capacity and rigidity and thus degrades the seismic performance of the structure.
Previous studies have shown that the diameter and arrangement of axial bars significantly affect the bond performance at the joint. Therefore, in bridge piers with densely arranged small-diameter axial bars, the bond between axial bars and footing concrete may be lost due to decreased anchorage performance, possibly changing the failure mode from flexural failure, as assumed in current designs, to a failure mode caused by rocking deformation.
In this study, considering the above background, cyclic loading tests and finite element analysis based on reduced-scale RC column models, consisting of different diameters and numbers of axial bars with similar reinforcement ratios and strengths, were conducted. Through these, the influence of bond-slip phenomena in RC bridge piers with densely arranged small-diameter axial bars on the seismic reinforcement performance of RC columns was investigated. The structure of this paper is described below.
In Chapter 2, cyclic loading tests using RC column specimens with densely arranged small-diameter axial bars, having similar reinforcement ratios and strengths compared to the standard reduced-scale RC bridge pier models commonly used in previous studies, were conducted. The influence of small-diameter axial bars on the deformation and load-bearing performance and failure mechanisms of RC columns was compared with standard specimens. Specifically, analyses and considerations were made regarding the strain history of axial bars at loading stages, load-strain relationship history, damage conditions of reinforcements inside the specimens, and rotational deformation behaviors calculated from vertical displacements on both sides of the column base.
In Chapter 3, reproduction analysis of cyclic loading tests based on nonlinear finite element methods was conducted. It was clarified that it is necessary to consider the bond between axial bars and concrete. A new modeling method to reproduce the bond-slip phenomena between axial bars and concrete in RC columns was proposed. In these numerical analysis methods, focusing on the bond-slip behavior of reinforcements at the joint and differences in bond failure characteristics caused by different reinforcement arrangements, detailed analyses were conducted on how they affect the overall deformation and load-bearing performance of RC columns. From these analyses and considerations, the performance and failure mechanisms of RC columns with densely arranged small-diameter axial bars were summarized.
In Chapter 4, the possibility of seismic reinforcement for RC columns with small-diameter axial bars was verified. Various reinforcement works continue to be carried out on existing transportation infrastructure for reasons such as improving the seismic performance of RC bridge piers, extending the life of aging structures, and preparing for increasingly severe heavy-rain disasters. Many existing RC bridge piers designed and constructed under old seismic standards use axial bars of smaller diameter than current standards require and lack sufficient flexural strength. In selecting a reinforcement method, seismic resistance, durability, workability, and economy must be considered comprehensively. Especially for river piers, construction must proceed smoothly within a limited period, and in some cases a reinforcement method with a thin wrapping thickness is chosen to reduce the riverbed occupancy rate while maintaining performance over the long term. Because it was unclear whether a sufficient reinforcement effect could be expected under such reinforcement and anchorage conditions, cyclic loading tests were conducted on specimens with insufficient deformation performance that were reinforced with PCM materials, and their load-bearing and deformation performance was evaluated. Detailed verification focused on the suppression of anchorage failure of the axial bars and of rotational deformation in the plastic hinge section caused by bond failure. It was clarified that the high-strength PCM pouring reinforcement method can suppress both the anchorage failure of the existing reinforcements and the rocking deformation of the existing part.
In Chapter 5, verification based on nonlinear finite element methods was conducted on the specimens reinforced in the previous chapter, focusing on the suppression effect of anchorage failure of axial bars in the existing part and rocking deformation due to the wrapping reinforcement of PCM materials targeted in cyclic loading tests. By appropriately modeling the PCM reinforced part and the reinforced part reinforcements, it was possible to reproduce the pinching phenomena observed in the unloading and reloading history of cyclic loading tests, and it was clarified that the rocking deformation of the plastic hinge part caused by bond failure at the base of the specimen could also be suppressed.
Finally, the conclusions of each chapter were summarized, and a comprehensive summary of the research results on the seismic reinforcement performance of RC bridge piers with densely arranged small-diameter axial bars focusing on bond-slip behavior was conducted. Also, unresolved issues in this study were raised, and descriptions were made regarding future research issues.
Creators :
SHAO PEILUN
Creators :
AMANDANGI WAHYUNING HASTUTI
Creators :
LOONG GLEN KHEW MUN
Products have a life cycle, and the development activities required change with each stage, so it is important to grasp the state of a product's innovation in the market. Launching a product before the emergence of the dominant design is also said to be one effective way for the product to gain wide acceptance. However, the timing of dominant design emergence can only be known after the fact. As a countermeasure, methods using patent information have been examined by many researchers, but they require experts in the product's technology; for example, the patent classification codes and technical terminology needed for the analysis cannot otherwise be identified. Prior studies using patent information include methods based on patent classification codes, on text mining, and on machine learning and deep learning, but all require expert knowledge of the product's technology. There is therefore a social demand for a method that can capture changes in the state of innovation and the timing of dominant design emergence without relying on the judgment of technical experts.
This study proposes a new method that uses Japanese patent information and F-term classification codes to obtain the timing of dominant design emergence without relying on the judgment of experts in the product's technology. To verify the usefulness of the analysis method, we confirmed, using product development cases, that the method can indicate the timing of dominant design emergence. The verification targeted assembled products in the precision instrument and equipment field to which F-terms have been assigned.
This thesis consists of the following four chapters.
Chapter 1 describes the background of the research and surveys prior studies. It identifies the issues in prior work, defines the problems to be addressed, and sets the objective of this research. It also outlines the structure of the thesis.
Chapter 2 proposes a new method that uses F-terms to capture changes in the state of innovation and the timing of dominant design emergence without relying on the judgment of experts in the product's technology. The proposed method first requires identifying the FI patent classification codes in order to select the patents related to the product under analysis. In examining a method for identifying FIs, we showed, taking cameras as the subject, that FIs can be obtained from common words describing the product. At the same time, to identify the patent classification codes representing core technologies from the voice of the customer, we analyzed a Japanese agricultural mower manufacturer and showed that the theme codes representing its core technologies can be identified. Next, to confirm the method for obtaining changes in the state of innovation and the timing of dominant design emergence, we used these results together with F-terms: F-terms were identified from the product-related FIs, and changes in the state of innovation were derived from the F-terms. Applying the conditions of the A-U model proposed by Abernathy and Utterback to the derived changes, we showed that the timing of dominant design emergence can be identified. We also analyzed inkjet printers, NC machine tools, and projectors, obtained the timing of dominant design emergence for each, and compared the results with the products' known dominant design periods.
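One hedged illustration of how F-term data could signal a stabilizing design: count how many F-terms appear for the first time in each filing year, on the reading that a dominant design shows up as the point after which few genuinely new technical features (new F-terms) appear. The sample records, the F-term codes, and this particular reading of the A-U model are hypothetical, not the method's actual rules.

```python
from collections import defaultdict

def first_appearance_counts(records):
    """Count how many F-terms appear for the first time in each filing year.
    records: iterable of (year, list_of_fterm_codes)."""
    seen, new_per_year = set(), defaultdict(int)
    for year, fterms in sorted(records, key=lambda r: r[0]):
        for ft in fterms:
            if ft not in seen:
                seen.add(ft)
                new_per_year[year] += 1
    return dict(new_per_year)

# Hypothetical (year, F-term list) records for one product class
records = [
    (1990, ["2C056AA01", "2C056BB02"]),
    (1991, ["2C056AA01", "2C056CC03", "2C056DD04"]),
    (1992, ["2C056BB02", "2C056CC03"]),   # no new F-terms: design stabilizing
]
print(first_appearance_counts(records))  # {1990: 2, 1991: 2}
```

The actual method additionally applies the A-U model's conditions to the derived innovation-state changes; this sketch only shows the counting step.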
Chapter 3 verifies the usefulness of the new method presented in Chapter 2. For the verification, we analyzed a Japanese company that succeeded in product development as a case study and compared the findings with the results obtained in Chapter 2. The company analyzed was one of the earliest entrants in the Japanese market for commercial edible-ink inkjet printers and holds the top market share. Comparing the company's product development history with the analyzed timing of dominant design emergence showed that both the start of product development and the market launch preceded the emergence of the dominant design. This confirms that the method proposed in this study is useful for judging the timing of market entry in product development. Furthermore, because product development depends not only on launch timing relative to dominant design emergence but also on target selection and on technology development that realizes distinctiveness and quality, we also investigated the case company's business strategy and execution plan. The company set its market launch timing and then strategically and systematically carried out target selection, identification of the issues to be solved to realize product distinctiveness, and technology development to solve them by the target date. This, too, suggests that the proposed method can be used in business strategy to decide the timing of market entry.
Chapter 4 summarizes Chapters 2 and 3. Bringing their results together, it shows that the patent analysis method newly proposed in this study, which uses patent information and the F-terms of the Japanese patent classification, can identify the timing of dominant design emergence without relying on expert judgment for assembled products in the precision instrument and equipment field to which F-terms have been assigned, and the verification based on a corporate product development case suggests the usefulness of the proposed method. The chapter also describes the limitations of this research, future prospects, and the issues to be addressed to realize them.
Creators :
石井 好恵
In the dynamic global business landscape, finance and innovation stand as the twin pillars of corporate success. Both finance and innovation are vital for a company's long-term viability, demanding a harmonious interplay between prudent financial management and a culture that fosters innovation. Research in the field examining the relationship between a firm's finance and innovation is rapidly growing, offering profound insights into the dynamics shaping organizational success. While many empirical studies traditionally presumed that financial support drives innovative efforts, alternative perspectives support the reverse causation hypothesis, suggesting that innovation can stimulate financial performance. The current corporate management research often takes a segmented approach, focusing on either the signaling effect of innovation on financial performance or the influence of finance on innovation performance. While insightful, this segmented approach resembles examining separate puzzle pieces without considering the whole picture.
We contend that finance and innovation are mutually interdependent, influencing each other. Our study uniquely explores both dimensions, investigating how financial resources stimulate innovation, and how innovation, primarily represented by patents, attracts investors and secures financial support. Focusing on Japanese corporations, our research provides a distinctive perspective due to Japan's diverse business landscape, strong patent system, and commitment to innovation. Japan's risk-averse market and global competition highlight the importance of innovation and the role of patents as signals for economic growth.
The first study scrutinizes the intricate relationship between financial resources and firms' innovation outputs, exploring the influence of various financing sources, internal and external, inspired by the Pecking Order Theory. It involves a sample of 113 Japanese manufacturing firms listed on the JASDAQ market, using patent-based metrics to gauge technological innovation. The study highlights the crucial roles played by both internal and external financial resources in driving innovation outputs. Firms demonstrate a strong preference for self-generated financing, particularly internal funding. Additionally, the research unveils the complementary impact of debt financing, especially when internal resources are depleted, aligning with the Pecking Order Theory's risk principles.
In our second study, we explore the reverse causation between innovation and finance, particularly during initial public offerings (IPOs). IPOs are pivotal, as they provide capital for growth and enhance a firm's reputation. However, information asymmetry poses a challenge, leading investors to rely on quality signals. We hypothesize that patents, as a proxy for innovation, mitigate information asymmetry because their information is verifiable, observable, and entails maintenance costs. Thus, a company with numerous patents before an IPO is likely to gain investor trust, leading to a more successful IPO. We analyze 338 newly listed Japanese firms across various industries, finding robust positive correlations between pre-IPO patent applications and IPO financial performance. This contribution enriches the literature on the impact of patents on IPO performance and illuminates the broader influence of innovation on finance.
The third study delves into the dynamics of patent signaling within IPO firms, distinguishing between high-tech and low-tech sectors. High-tech firms often face more information asymmetry, with less transparency in R&D and patent disclosures, making them riskier for investors. Low-tech firms, with valuable patents and balanced resource allocation, are more accessible to investors. This raises the question of whether high-tech firms are less successful in using patent signals to raise total capital during the IPO process, as previous research has mainly focused on high-tech firms in technology-intensive markets. While prior studies often grouped all IPOs together or concentrated on specific industries, our study adds fresh insights to the entrepreneurship and innovation landscape by asserting that patents exert a more substantial influence on IPO success for low-tech companies in comparison to their high-tech counterparts. This observation underscores the necessity for an in-depth exploration of the patent signaling mechanism in IPOs, especially for low-tech firms characterized by simpler innovation portfolios and tangible assets appealing to risk-averse investors.
Overall, our dissertation offers a comprehensive exploration of the interplay between finance and innovation in Japanese corporations, providing nuanced insights into the implications of this symbiotic relationship for businesses, policymakers, and scholars worldwide.
Creators :
LE THUY NGOC AN
In China, about 800,000 of the roughly 20 million infants born each year have congenital diseases, and nearly 200,000 fetuses have serious defects or diseases. The birth of these affected fetuses imposes a heavy economic burden and serious social problems on families and on society as a whole. It is therefore important to carry out early fetal monitoring in order to detect fetal defects and diseases as early as possible. Umbilical artery blood signals contain important information about fetal growth and development, reflecting various problems during pregnancy, such as intrauterine growth retardation (IUGR), hypoxia, and maternal hypertension, which can be detected from these signals. The analysis of umbilical artery blood signals is therefore important for prenatal monitoring and the diagnosis of fetal health status.
The acoustic spectral parameter method is a conventional technique for analyzing umbilical artery blood signals and consists of three parameters that serve as clinical diagnostic criteria: the resistance index (RI), the pulsatility index (PI), and the maximum systolic/end-diastolic umbilical flow velocity ratio (S/D). However, these parameters ignore phase properties of the signal, such as phase delay, phase frequency, and phase mode, and focus only on fundamental statistical parameters of the blood velocity, such as the maximum, minimum, and mean values. This may lead to clinical misdiagnosis.
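These three indices can be computed directly from the maximum-velocity envelope of one cardiac cycle. The sketch below uses a hypothetical envelope and assumes S is the peak systolic velocity, D the end-diastolic velocity, and the mean is taken over the cycle:

```python
def doppler_indices(envelope):
    """Conventional umbilical-artery Doppler indices from one cardiac
    cycle of the maximum-velocity envelope (velocities in cm/s)."""
    S = max(envelope)                 # peak systolic velocity
    D = min(envelope)                 # end-diastolic velocity
    mean_v = sum(envelope) / len(envelope)
    ri = (S - D) / S                  # resistance index (RI)
    pi = (S - D) / mean_v             # pulsatility index (PI)
    sd = S / D                        # systolic/diastolic ratio (S/D)
    return ri, pi, sd

# Hypothetical envelope samples over one cycle
ri, pi, sd = doppler_indices([60, 55, 48, 40, 32, 26, 22, 20, 25, 40])
print(round(ri, 3), round(pi, 3), round(sd, 2))  # 0.667 1.087 3.0
```

Note that all three indices depend only on a few summary statistics of the envelope, which is exactly the limitation the text describes.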
Umbilical artery blood signals have complicated structures and nonlinear characteristics in addition to changes in signal amplitude. This dissertation presents a comprehensive new approach to characteristic parameter extraction and classification of umbilical artery blood signals using fractal theory and chaos theory in order to handle these complex structures and nonlinear properties. First, focusing on the fractal characteristics of the signals, the box-counting fractal dimension (BD) and the correlation dimension (CD) are obtained, verifying that BD is positively correlated with the gestational week and that CD is effective in discriminating normal from abnormal signals. Next, the maximum Lyapunov exponent (MLE) of the umbilical artery blood time series is obtained, and its effectiveness in distinguishing normal from abnormal signals is verified. Finally, a diagnostic model is proposed by applying a particle swarm optimization support vector machine (PSO-SVM) to the conventional feature parameters (RI, PI, S/D) and the newly obtained parameters (BD, CD, MLE) to classify and diagnose umbilical artery blood signals in four statuses (normal, oligohydramnios, umbilical cord around the neck, and fetal malposition).
This doctoral dissertation consists of 6 chapters.
Chapter 1 introduces the background and methods of umbilical artery blood study and reviews the current research situation. The outline of this dissertation is also given.
In chapter 2, the fundamentals of fetal hemodynamics are described. The clinical significance and normal reference values of umbilical artery blood signal parameters are outlined. Details of the umbilical artery signal acquisition equipment, data classification and acquisition process are explained.
In Chapter 3, the box-counting fractal dimension (BD) and the correlation dimension (CD) are used to investigate the nonlinear characteristics of the umbilical artery blood signals based on fractal theory. First, the BD of the signals is calculated and their fractal characteristics are analyzed. The results show a positive relationship between the fractal dimension of umbilical artery blood signals and the gestational week. Abnormal and normal umbilical artery signals are then separated into an abnormal group and a normal group, and the Grassberger-Procaccia algorithm (GP algorithm) is used to calculate and analyze the CD of the two groups. The overall CD of normal umbilical artery blood signals is greater than that of abnormal signals, and CD discriminates the normality of the signal significantly better than the conventional parameters. Furthermore, the Hurst exponent of the umbilical artery blood signal is calculated and analyzed by Lo's method. The results show that the umbilical artery blood signal is non-stationary and exhibits obvious "1/f fluctuation" characteristics.
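A minimal box-counting sketch: the signal's normalized graph is covered with square boxes of shrinking size, and the dimension is the slope of log N(k) against log k. The scales and the test signal below are illustrative (a non-constant signal is assumed); the dissertation's exact implementation is not given here.

```python
import math

def box_counting_dimension(series, scales=(4, 8, 16, 32, 64)):
    """Estimate the box-counting dimension of a 1-D signal by covering
    its graph, normalized into the unit square, with k x k box grids."""
    n = len(series)
    lo, hi = min(series), max(series)
    pts = [(i / (n - 1), (x - lo) / (hi - lo)) for i, x in enumerate(series)]
    logs = []
    for k in scales:  # k boxes per axis, box size 1/k
        boxes = {(int(px * k * 0.99999), int(py * k * 0.99999)) for px, py in pts}
        logs.append((math.log(k), math.log(len(boxes))))
    # least-squares slope of log N(k) versus log k
    mx = sum(x for x, _ in logs) / len(logs)
    my = sum(y for _, y in logs) / len(logs)
    num = sum((x - mx) * (y - my) for x, y in logs)
    den = sum((x - mx) ** 2 for x, _ in logs)
    return num / den

# Sanity check: a smooth straight line should come out close to dimension 1
line = [i * 0.01 for i in range(101)]
print(round(box_counting_dimension(line), 2))  # close to 1.0
```

A fractal signal, by contrast, yields a non-integer slope between 1 and 2, which is the quantity the chapter correlates with gestational week.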
In Chapter 4, the chaotic phase space diagram method and the maximum Lyapunov exponent (MLE) are used to determine the chaotic characteristics of umbilical artery blood signals from qualitative and quantitative perspectives. Attractor reconstruction of the signals is performed in three-dimensional (3D) and two-dimensional (2D) phase space. The results show that the chaotic phase diagram of the time series for abnormal umbilical artery signals exhibits a jumbled "ball of wool" state, while the chaotic "shape" appears to converge. Applying the receiver operating characteristic (ROC) curve to the obtained MLE shows that its discrimination of the normality of the umbilical artery blood signal is significantly better than that of the conventional feature parameters.
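A simplified, Rosenstein-style sketch of the two steps named here: time-delay embedding to reconstruct the attractor, then an MLE estimate from the average log divergence rate of initially nearest neighbors. The embedding dimension, delay, step count, and temporal-exclusion window are assumptions, and the chaotic logistic map stands in for a real umbilical signal.

```python
import math

def delay_embed(series, dim=3, tau=1):
    """Reconstruct the attractor by time-delay embedding (Takens)."""
    n = len(series) - (dim - 1) * tau
    return [tuple(series[i + j * tau] for j in range(dim)) for i in range(n)]

def largest_lyapunov(series, dim=3, tau=1, steps=5):
    """Average log divergence rate of nearest neighbors after `steps`
    iterates; a positive value indicates sensitive dependence (chaos)."""
    emb = delay_embed(series, dim, tau)
    n = len(emb)
    rates = []
    for i in range(0, n - steps, 10):
        # nearest neighbor, excluding temporally close points
        j = min((k for k in range(n - steps) if abs(k - i) > 10),
                key=lambda k: sum((a - b) ** 2 for a, b in zip(emb[i], emb[k])))
        d0 = math.dist(emb[i], emb[j])
        d1 = math.dist(emb[i + steps], emb[j + steps])
        if d0 > 0 and d1 > 0:
            rates.append((math.log(d1) - math.log(d0)) / steps)
    return sum(rates) / len(rates)

# The logistic map at r = 4 is a standard chaotic test series
x, xs = 0.4, []
for _ in range(1000):
    x = 4 * x * (1 - x)
    xs.append(x)
print(largest_lyapunov(xs))  # positive value indicates chaos
```

Production estimators refine this with proper neighbor statistics and a fitted divergence curve; the sketch only conveys the idea behind the MLE feature.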
In Chapter 5, an artificial intelligence classifier is proposed to classify the four states of umbilical artery blood signals (normal, oligohydramnios, umbilical cord around the neck, and fetal malposition). A support vector machine (SVM) classifier is constructed based on the conventional parameters S/D, PI, and RI. A particle swarm optimization support vector machine (PSO-SVM) classifier is also constructed using the fractal dimension (BD), correlation dimension (CD), and maximum Lyapunov exponent (MLE) derived in Chapters 3 and 4 as feature parameters. The classification tests show that the PSO-SVM classifier is more accurate, confirming the usefulness and effectiveness of the proposed classification method.
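The optimizer component of PSO-SVM can be sketched as a minimal particle swarm: each particle is pulled toward its own best position and the swarm's best. In PSO-SVM the objective would be the SVM's cross-validation error over hyperparameters such as C and gamma; here a toy quadratic stands in for that error surface, and all coefficients are conventional defaults, not the dissertation's settings.

```python
import random

def pso(f, bounds, n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5):
    """Minimize f over box `bounds` with a basic particle swarm."""
    dim = len(bounds)
    rnd = random.Random(0)
    pos = [[rnd.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rnd.random(), rnd.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # clamp to the search box
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f

# Toy objective standing in for SVM validation error (minimum at (3, -2))
best, val = pso(lambda p: (p[0] - 3) ** 2 + (p[1] + 2) ** 2,
                [(-10, 10), (-10, 10)])
print(best, val)
```

Swapping the toy objective for a function that trains an SVM and returns its validation error turns this into the PSO-SVM tuning loop the chapter describes.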
In Chapter 6, a summary of this dissertation and directions for future work are described.
Creators :
YU KAIJUN
The COVID-19 pandemic has significantly transformed higher education, shifting it from traditional classrooms to online platforms. This change requires reassessment and adaptation of educational methods, particularly student assessment. Online formative assessments have become essential for improving teaching and learning outcomes because they provide immediate feedback, enable interactive support, and encourage self-assessment, thereby playing a key role in the learning process.
The multiple-choice test is widely used to assess students. However, the inherent nature of multiple-choice questions carries the risk that students obtain correct answers without a genuine understanding of the content. The typical countermeasure is to increase the number of questions. To address this concern differently, this study introduces a new constraint that enhances the inherent characteristics of the multiple-choice format. The research objective is to investigate innovative scoring methods for formative assessments in online courses that can improve learning in higher education, in the context of Yamaguchi University.
This study evaluated the effectiveness of this learning assessment method using multiple-choice questions, presenting a practical and efficient approach to online formative assessment designed for large student cohorts. The new scoring method extends Ikebururo's concept of partial scoring in MCQ design, leading to a new scoring system centered on the "degree of matching." This approach compares the alignment between student responses and the instructor's design, resulting in a detailed five-level scoring system for four-choice questions. The method hinges on evaluating how closely students' answers align with the instructor's intended choices. Each question, with its four choices, is treated as a binary pattern represented by a 4-digit binary number. Each digit corresponds to a specific choice, allowing a granular assessment of the match between the student's selection and the ideal answer. This approach steps away from the conventional pass-fail binary system, offering a spectrum of evaluation outcomes, and provides a better understanding of students' comprehension by gauging the extent of alignment between their choices and the instructor's design.
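Under this description, a minimal sketch of the five-level rule might look like the following; the mapping from matched positions directly to points is an assumption based on the 4-digit comparison described above, not the study's exact rubric.

```python
def matching_score(student, key):
    """Five-level 'degree of matching' score for one four-choice question.
    Each choice is one bit (1 = selected); the score is the number of
    positions where the student's pattern agrees with the instructor's
    key, giving 0-4 instead of a single pass/fail bit."""
    assert len(student) == len(key) == 4
    return sum(1 for s, k in zip(student, key) if s == k)

# Key says only choice B is correct (0100); student picked B and D (0101)
print(matching_score("0101", "0100"))  # 3 of 4 positions agree
```

The conventional binary rule would give this response zero credit; the degree-of-matching score instead records that three of the four choice decisions were right.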
This method can enhance assessment accuracy by capturing the subtleties of student responses beyond mere correctness, earning partial points for partial knowledge or progress via multistep reasoning, promoting critical thinking, recognizing the importance of incremental progress, and capturing the depth of a respondent's knowledge.
Initially, an extensive literature review established a theoretical framework, identifying gaps in the current understanding of online formative assessments. Subsequently, the study examined data collected from graduate students in the 'Advanced Research and Development Strategies' course at Yamaguchi University. The data span two academic years, 2019 and 2020, and provide a comparative view of face-to-face and online lecture formats.
Furthermore, the k-means clustering algorithm was used to analyze student performance using formative assessment scores. This method categorizes student performance into distinct clusters, revealing insights into individual learning behaviors. The k-means method, a popular technique in data mining and pattern recognition, efficiently groups data into 'k' clusters. It is effective for large datasets and versatile across various data types. The technique involves steps such as initialization, assignment, centroid updating, and convergence checking, and is instrumental in identifying performance patterns, enabling the development of more focused educational strategies.
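The clustering steps listed above (initialization, assignment, centroid updating, convergence checking) can be sketched with a plain k-means on one-dimensional scores; the score values and k below are illustrative, not the study's data.

```python
def kmeans_1d(scores, k=3, iters=50):
    """Plain k-means on 1-D assessment scores: assign each score to the
    nearest centroid, move each centroid to its cluster mean, repeat
    until the centroids stop changing."""
    lo, hi = min(scores), max(scores)
    centroids = [lo + (hi - lo) * i / (k - 1) for i in range(k)]  # init
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for s in scores:                       # assignment step
            nearest = min(range(k), key=lambda c: abs(s - centroids[c]))
            clusters[nearest].append(s)
        new = [sum(c) / len(c) if c else centroids[i]   # centroid update
               for i, c in enumerate(clusters)]
        if new == centroids:                   # convergence check
            break
        centroids = new
    return sorted(centroids), clusters

# Hypothetical formative-assessment totals for a small cohort
cents, groups = kmeans_1d([12, 14, 15, 40, 42, 45, 78, 80, 85], k=3)
print(cents)  # three performance-cluster centers
```

Each resulting cluster then corresponds to a performance band (for example low, middle, high), which is what enables the targeted teaching strategies mentioned above.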
The results demonstrate the potential of the four-choice multiple-choice scoring method to revitalize online formative assessments. The key contributions of this study are as follows:
・Innovative Scoring Method: This study shows how the four-choice method can lead to more dynamic and engaging online assessments. This approach captures student performance more accurately and encourages deeper engagement with the material.
・Enhanced Student Engagement and Understanding: The new four-choice scoring method significantly affected student engagement and understanding. It fosters an environment in which students are more actively involved in their learning process, contributing to better comprehension and retention of material.
・Practical Implications for Educators and Institutions: The need to adapt assessment strategies for digital learning, focusing on continuous feedback and personalized learning.
・Educational Technology Contribution: Key insights into adapting assessment strategies for digital learning, emphasizing continuous feedback, and personalized learning.
This dissertation presents a comprehensive examination of new assessment techniques in the context of online learning. This provides a critical roadmap for educators and institutions to adapt to the digital educational environment for more effective and engaging assessment practices in online higher education.
Creators :
SONEPHACHANH MALAYPHONE
Creators :
GERDPRASERT THANAWIT
As the population ages, the demand for elderly care services, including specialized care, daily life support, and medical and health services, will continue to increase. As a result, informal caregiving provided by non-professionals such as family, friends, neighbors, and volunteers is becoming more prevalent. Injuries that occur during caregiving can affect the caregiver's life, especially their mental and physical health. Correct positioning and posture during caregiving are therefore crucial to prevent musculoskeletal disorders among caregivers. Although training programs help reduce the risk of musculoskeletal disorders for informal caregivers, many caregivers report that it is still difficult to grasp the correct caregiving postures, and they struggle to obtain professional advice for correcting their posture through long-term practice. A targeted ergonomic posture risk assessment and guidance method is therefore needed to reduce caregivers' posture-related risks, enhance work efficiency, and safeguard their physical health.
Rapid Entire Body Assessment (REBA) is a postural risk assessment method based on ergonomics that has recently been attracting attention; it evaluates risk essentially from the angle of each joint of the body. In caregiving movements, however, the way the load is placed on the caregiver and the time for which postures must be maintained vary greatly with the weight and posture of the person being cared for, so risk assessment with the current REBA is insufficient for caregiving movements. In addition, posture recognition algorithms such as OpenPose are often used to extract skeletons, but problems such as missing skeletons or misrecognition frequently occur due to image conditions or the overlapping of multiple people, and skeleton extraction sometimes fails.
In this research, the Spatial Temporal Graph Convolutional Network (ST-GCN) is applied to develop a technique for completing missing skeletons based on behavioral features and a technique for correcting skeletons misrecognized due to overlapping people, improving the accuracy of skeletal joint angle calculation. To evaluate caregiving posture risk more appropriately, parameters such as the center-of-gravity trajectory, load duration, and asymmetric load during caregiving movements are investigated, and a new REBA method is proposed.
This paper consists of six chapters.
In Chapter 2, to solve the problems of skeleton misidentification and missing information in OpenPose output, an improved skeleton reconstruction method based on ST-GCN is proposed. The method compensates for missing skeletons using behavioral features and corrects incorrectly identified skeletons based on skeleton weight features. This approach improves the accuracy and robustness of pose recognition and allows more accurate estimation of skeletal joint angles and the corresponding REBA score.
In Chapter 3, to address the issue of REBA evaluation scores being too high for caregiving scenarios, a postural risk assessment method (C-REBA) is proposed that considers the characteristics of caregiving tasks. The traditional REBA method is customized by adding parameters such as the center-of-gravity trajectory, load duration, and asymmetric loading to the evaluation score. The caregiving movements involved in assisting a transfer from a bed to a wheelchair are analyzed for a group of experienced nurses and a group of inexperienced caregivers, and the effectiveness of the C-REBA method is verified.
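A heavily hedged sketch of the C-REBA idea: start from a standard REBA score and add increments for the caregiving-specific factors named above. The thresholds and increment sizes here are invented for illustration and are not the dissertation's actual rules.

```python
def c_reba_adjust(base_reba, load_duration_s, asymmetric, cog_sway_cm):
    """Illustrative C-REBA-style adjustment of a standard REBA score.
    All thresholds and +1 increments are assumptions for the sketch."""
    score = base_reba
    if load_duration_s > 60:      # sustained load on the caregiver
        score += 1
    if asymmetric:                # one-sided loading of the trunk
        score += 1
    if cog_sway_cm > 10:          # large center-of-gravity excursion
        score += 1
    return min(score, 15)         # REBA scores are capped at 15

# Sustained, asymmetric transfer with modest center-of-gravity sway
print(c_reba_adjust(8, load_duration_s=90, asymmetric=True, cog_sway_cm=5))  # 10
```

The point of the sketch is only the structure: caregiving-specific factors enter as additive adjustments on top of the joint-angle-based REBA score.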
In Chapter 4, a method combining the ST-GCN framework with C-REBA for postural risk assessment is proposed. A deep neural network algorithm is applied to learn motion features together with additional features such as load duration, motion frequency, center-of-gravity variation, and asymmetric load, so that all evaluation parameters required by the C-REBA rules can be obtained automatically. With this method, the postural risk assessment process for caregiving operations can be performed automatically.
In Chapter 5, the "Behavior Analysis and Posture Assessment System" (BAPAS) is developed. BAPAS is a system for assessing the risk of musculoskeletal disorders related to working postures in medical support work. This chapter introduces the functions and usefulness of the system and demonstrates how easily it can be extended to other medical fields simply by adjusting its parameter settings.
Chapter 6 provides a summary of the paper as a whole and future prospects.
Creators :
Han Xin
Creators :
Mukaida Mashiho
In recent years, not only mRNA (messenger RNA) but also small non-coding RNAs have attracted attention for molecular diagnosis and therapy in oncology. In human medicine in particular, many studies have elucidated the abilities and functions of microRNAs (miRNAs), which are small non-coding RNAs; however, such studies remain scarce in the veterinary field. In my PhD study, I focused on small non-coding RNAs in canine oncology. In the first chapter, I studied dysregulated miRNAs in canine oral melanoma. First, I performed microarray-based miRNA profiling of canine malignant melanoma (CMM) tissue obtained from the oral cavity, and then confirmed the differentially expressed miRNAs by quantitative reverse transcription-PCR (qRT-PCR). Analysis of the microarray data revealed 17 dysregulated miRNAs: 5 were up-regulated and 12 were down-regulated. qRT-PCR analysis was performed for 2 up-regulated (miR-204 and miR-383), 3 down-regulated (miR-122, miR-143, and miR-205), and 6 additional oncogenic miRNAs (oncomiRs; miR-16, miR-21, miR-29b, miR-92a, miR-125b, and miR-222). The expression levels of seven of the miRNAs (miR-16, miR-21, miR-29b, miR-122, miR-125b, miR-204, and miR-383) were significantly up-regulated, while the expression of miR-205 was down-regulated in CMM tissues compared with normal oral tissues. The microarray and qRT-PCR analyses validated the up-regulation of two potential oncomiRs, miR-204 and miR-383. I also constructed a protein interaction network and a miRNA–target regulatory interaction network using STRING and Cytoscape. In the proposed network, targets were identified for miR-383 and for miR-204, including one target shared by both. miR-383 and miR-204 are potential oncomiRs that may be involved in regulating melanoma development by evading DNA repair and apoptosis. In my second chapter, I focused on non-coding RNAs other than miRNAs and compared canine hepatocellular carcinomas (HCC) and hepatocellular adenomas (HCA).
I elucidated the differential expression of Y RNA-derived fragments, which had yet to be investigated in canine HCC and HCA. I used qRT-PCR to determine Y RNA expression in clinical tissues, plasma, plasma extracellular vesicles, and two HCC cell lines (95-1044 and AZACH). Y RNA was significantly decreased in tissue, plasma, and plasma extracellular vesicles in canine HCC versus canine HCA and healthy controls. Y RNA was decreased in 95-1044 and AZACH cells versus normal liver tissue, and in AZACH versus 95-1044 cells. In plasma samples, Y RNA levels were decreased in HCC versus HCA and healthy controls, and increased in HCA versus healthy controls. Receiver operating characteristic (ROC) analysis showed that Y RNA could be a promising biomarker for distinguishing HCC from HCA and healthy controls. Overall, the dysregulated expression of Y RNA can distinguish canine HCC from HCA; however, further research is necessary to elucidate the underlying Y RNA-related molecular mechanisms in hepatocellular neoplastic diseases. To the best of my knowledge, this is the first report on the relative expression of Y RNA in canine HCC and HCA. In conclusion, I have demonstrated the up-regulation of the potential oncomiRs miR-16, miR-21, miR-29b, miR-122, miR-125b, miR-204, and miR-383 in CMM tissues. In particular, the strong up-regulation of miR-383 in CMM tissues compared with normal oral tissues identified by microarray screening was confirmed by qRT-PCR. I conclude that miR-383 and miR-204 may promote melanoma development by regulating the DNA repair/checkpoint and apoptosis pathways. I also demonstrated Y RNA dysregulation in canine HCC; to my knowledge, this is the first report on Y RNA in canine tumors. Interestingly, this ncRNA has distinctive characteristics and differentiates malignant tumors (HCC) from benign tumors (HCA), and its expression pattern is consistent across clinical samples and cell lines.
Thus, Y RNA has promising potential for differentiating HCC from HCA. Further research is required to fully elucidate the role of Y RNA in the development and progression of canine HCC and HCA.
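The ROC analysis used to evaluate Y RNA as a biomarker reduces to ranking expression values from the two groups. A minimal sketch of the area under the ROC curve, using made-up expression values rather than the study's data:

```python
def roc_auc(positives, negatives):
    """AUC via the Mann-Whitney formulation: the probability that a randomly
    chosen positive case scores higher than a randomly chosen negative one."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in positives for n in negatives)
    return wins / (len(positives) * len(negatives))

# Hypothetical relative Y RNA levels (Y RNA is lower in HCC, so higher values
# indicate HCA); these numbers are illustrative, not measured data.
hca = [0.8, 0.9, 0.7, 0.85]
hcc = [0.2, 0.3, 0.1, 0.4]
print(roc_auc(hca, hcc))  # → 1.0 (perfect separation in this toy example)
```

An AUC near 1.0 corresponds to the "promising biomarker" conclusion; an AUC of 0.5 would mean the marker cannot separate the groups at all.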
Creators :
Ushio Norio
In Japan, China, and Singapore, several studies have reported increased incidences of peripheral venous catheter-related bloodstream infection by Bacillus cereus during the summer. We therefore hypothesized that bed bathing with B. cereus-contaminated "clean" towels increases B. cereus contact with the catheter and the odds of contaminating peripheral parenteral nutrition (PPN). We found that 1) professionally laundered "clean" towels used in hospitals carry B. cereus (3.3×10^4 colony-forming units (CFUs)/25 cm^2), 2) B. cereus is transferable onto the forearms of volunteers by wiping with the towels (n=9), and 3) B. cereus remains detectable (80-660 CFUs/50 cm^2) on the forearms of volunteers even after subsequent disinfection with alcohol wipes. We further confirmed that B. cereus grows robustly (from 10^2 CFUs/mL to more than 10^6 CFUs/mL) within 24 hours at 30°C in PPN. Altogether, we find that bed bathing with a towel contaminated with B. cereus leads to spore attachment to the skin, and that B. cereus can proliferate at an accelerated rate at 30°C compared with 20°C in PPN. We therefore strongly recommend ensuring the use of sterile bed bath towels prior to PPN administration via catheter in patients requiring bed bathing.
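The reported growth from 10^2 to more than 10^6 CFUs/mL within 24 hours implies a mean doubling time that can be back-calculated directly. This is a rough estimate assuming steady exponential growth, not a figure from the study:

```python
import math

# Back-of-the-envelope estimate assuming steady exponential growth in PPN at 30°C.
n0, n24 = 1e2, 1e6            # CFUs/mL at 0 h and 24 h (figures from the text)
doublings = math.log2(n24 / n0)
doubling_time_h = 24 / doublings
print(round(doublings, 1), round(doubling_time_h, 2))  # → 13.3 1.81
```

A doubling time under two hours illustrates why even a small inoculum transferred during bed bathing can reach dangerous concentrations within one infusion period.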
Creators :
Hino Chieko
Creators :
Matsukuma Haruka
Creators :
Zheng Huan Yu
Creators :
Kambayashi Yoshinori
Given that inflammatory cytokines including TNF-α are associated with noise-induced hearing loss, this study examined whether adalimumab, a monoclonal antibody targeting TNF-α, can protect the inner ear from intense acoustic exposure. Adalimumab was administered to mice and its effects on the inner ear were evaluated, together with experiments on its penetration into the inner ear, its ototoxicity, and its influence on acoustic exposure. The results showed that although adalimumab partially reached the cochlea after administration, it increased susceptibility to acoustic exposure and increased hair cell loss in the inner ear. Although TNF-α had been considered a potential therapeutic target, the present results suggest that excessive suppression of TNF-α may adversely affect the inner ear. We acknowledge several limitations, including the use of adalimumab, an anti-human TNF-α antibody, instead of an anti-mouse TNF-α antibody, and the need to examine suppression of other cytokines to improve inner-ear protection. In conclusion, administration of adalimumab may increase the susceptibility of the inner ear to acoustic exposure and cause more serious hair cell damage, probably through excessive TNF-α suppression.
Creators :
山本 陽平
Objective: To evaluate the cost-effectiveness of three direct-acting antiviral (DAA) regimens (sofosbuvir-ledipasvir (SL), glecaprevir-pibrentasvir (GP), and elbasvir plus grazoprevir (E/G)) initiated at different stages of liver fibrosis in Japanese patients with genotype 1 chronic hepatitis C.
Methods: To evaluate the cost-effectiveness of treatment strategies applied at different fibrosis stages, we constructed a decision model reflecting fibrosis-stage progression. Six strategies were compared: treating all patients regardless of fibrosis stage (TA); treating patients at or above each of the four stages of fibrosis progression (F1S: withholding treatment at stage F0 and initiating it at stage F1 or above, and similarly F2S, F3S, and F4S for the respective stages); and no antiviral treatment (NoRx). Cost-effectiveness was examined over a lifetime horizon from the perspective of healthcare payers in Japan.
Results: In the base-case analysis, compared with the strategy of initiating treatment at fibrosis stage F2 (F2S), treating all patients (TA) yielded quality-adjusted life-year (QALY) gains of 0.32-0.33 for SL, GP, and E/G, with incremental cost-effectiveness ratios (ICERs) per QALY of US$24,320, US$18,160, and US$17,410, respectively. On the cost-effectiveness acceptability curve, TA was the most cost-effective strategy at a willingness-to-pay threshold of US$50,000, and all three DAAs fell within that threshold.
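The ICER figures above follow the standard definition, incremental cost divided by incremental QALYs. A minimal sketch with hypothetical cost and QALY values (not the study's actual model inputs):

```python
def icer(cost_new, qaly_new, cost_ref, qaly_ref):
    """Incremental cost-effectiveness ratio in USD per QALY gained."""
    return (cost_new - cost_ref) / (qaly_new - qaly_ref)

# Hypothetical lifetime costs and QALYs for a treat-all-like strategy vs.
# a treat-at-F2-like reference strategy (illustrative numbers only).
print(icer(58000, 10.5, 50000, 10.0))  # → 16000.0 USD per QALY
```

A strategy is considered cost-effective when its ICER falls below the willingness-to-pay threshold, here US$50,000 per QALY.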
Conclusion: In Japanese patients with genotype 1 chronic hepatitis C, treating all patients with direct-acting antivirals regardless of fibrosis stage is suggested to be cost-effective under usual conditions.
Creators :
末永 利一郎
Many mail filtering methods have been proposed, but they have not yet achieved perfect filtering. One of the reasons for this is the influence of modified words created by spammers to slip through mail filters, in which words are modified by inserting symbols, spaces, HTML tags, etc. Examples include "price$ for be$t drug$!", "priceC I A L I S", and "<font>se</font>xu<font>al</font>". These are frequently replaced with new strings by changing the combination of symbols, HTML tags, etc.
Mail filtering is a technique that captures trends in words in training mails (mails received in the past) and applies these trends to words in test mails (newly received mails). Some of the above modified words appear in both training and test mails, i.e., words that can be used as features of spam mail without further processing, while others appear only in test mails, i.e., words that have not been learned and require special processing (e.g., removal of symbols, search for similar words) before they can be used. However, existing methods do not make this distinction and treat both kinds in the same way.
Therefore, in order to bring the filtering performance of the existing methods closer to perfect filtering, we developed a method in which the above modified words are separated into words that appear in both training and test mails and words that appear only in test mails, and each of these words is used for mail filtering.
In this study, we refer to such modified words as "strange words". In addition to the examples above, typical strange words include new words contained in ham mails, proper nouns used in close relationships, and abbreviations.
The results of this study are as follows:
(1) In order to compare the filtering performance between strange words and other words, filtering experiments were conducted using existing methods with strange words, nouns, verbs, and adjectives. The results showed that the filtering performance of the strange words was the best. This means that strange words have a significant impact on the filtering performance, and we expect to improve the filtering performance of existing methods by developing a new method to utilize strange words.
(2) In order to examine the breakdown of strange words, we counted the number of words that appeared in both training and test mails, and the number of words that appeared only in test mails. The results were compared with those obtained for nouns, verbs and adjectives. We found that there are a significant number of strange words that appear in both training and test mails, but only in one of the groups, i.e., ham or spam mail. Words with this appearance pattern are most useful for mail filtering. On the other hand, we found that there are many strange words that appear only in test mails, i.e., words that cannot be learned. We expect to improve the filtering performance by separating these strange words and developing a new method to use each of them.
(3) For the use of strange words, we developed (A) a method for using words that appear in both training and test mails, and (B) a method for using words that appear only in test mails, respectively.
(A) To examine the breakdown of strange words that appear in both training and test mails, we divided them into two categories: words that appear only in ham and spam mails, i.e., words with patterns that improve filtering performance, and words that do not, and examined their frequency of occurrence. The results showed that the words with appearance patterns that improve filtering performance tend to appear more frequently than those without such patterns. This means that by using words with a certain number of occurrences in filtering, it is possible to use more words that improve filtering performance. We developed a method to do this and conducted experiments with different threshold values to find the optimal value, and confirmed that setting the threshold around 7 improves filtering performance.
(B) We compared the number of strange words that appear only in the test mails between ham and spam mails, and found that the number tends to be higher in spam mail than in ham mail. In order to utilize this difference for filtering, we proposed a method to set a uniform spam probability for strange words that appear only in the test mails, and attempted to find the optimal spam probability. As a result, setting the spam probability to 0.7 improved the filtering accuracy from 98.2% to 98.9%.
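A bsfilter-style Bayesian combiner can incorporate the proposed uniform probability for unlearned strange words. The sketch below uses the standard Graham-style combining formula; the individual word probabilities are illustrative, not learned from the paper's datasets:

```python
import math

UNSEEN_STRANGE_P = 0.7   # proposed uniform spam probability for strange words
                         # that appear only in test mails

def spam_score(word_probs):
    """Combine per-word spam probabilities in the Graham/Bayesian style
    used by bsfilter-like filters."""
    log_spam = sum(math.log(p) for p in word_probs)
    log_ham = sum(math.log(1.0 - p) for p in word_probs)
    return 1.0 / (1.0 + math.exp(log_ham - log_spam))

# Two learned words plus three unlearned strange words: the 0.7 bias
# pushes the combined score toward spam.
probs = [0.9, 0.4] + [UNSEEN_STRANGE_P] * 3
print(spam_score(probs) > 0.5)  # → True
```

Setting the uniform probability above 0.5 encodes the observed tendency that unlearned strange words occur more often in spam than in ham.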
By using (A) and (B) above together, both words that appear in both training and test mails and words that appear only in test mails can be used for mail filtering to increase accuracy.
Mail filtering has been improved and its performance has reached its limit. In order to further improve accuracy, i.e., to approach perfect filtering, a new perspective is needed, and this paper provides one such perspective: the use of strange words.
This paper is organized as follows.
In Chapter 1, we review the background of mail filtering methods and discuss how spammers use strange words to slip through such filters. The purpose and structure of this paper are then presented.
In Chapter 2, we discuss related research, giving examples of filtering methods that have been proposed so far.
In Chapter 3, we describe the mail datasets, word handling, and strange words used in this paper. This is followed by an explanation of the ROC curve, which is the measure used to evaluate filtering performance, and of scatter plots and box-and-whisker plots.
In Chapter 4, we compare the filtering performance between strange words and other words, and show that strange words have a significant impact on the filtering performance. Furthermore, based on the results of a breakdown of the number of strange words, we discuss the possibility of improving filtering performance by separating words that appear in both training and test mails from those that appear only in the test mails. We will work on this in the next chapters and report the results.
In Chapter 5, we develop a method to use (A) above, i.e., strange words that appear in both training and test mails. From the results of counting the number of words used in the subject and body of each email, we show that the number tends to be smaller for words that degrade the filtering performance. Based on these results, we propose a method that sets a threshold for the number of words used in the subject and body of mails, and uses only those words that exceed the threshold for classification. Experiments are conducted to find the optimal value by varying the threshold, and the effect of this method on performance is reported.
In Chapter 6, we develop a method to use (B) above, i.e., strange words that appear only in the test mails. We compare the number of types of these words in ham and spam mails, and show that the number tends to be larger in spam mails, and that this feature can be used as a bias for detecting spam mails. In this paper, we deal with experiments using bsfilter and develop a method to set spam probabilities uniformly for strange words that appear only in the test mails. After searching for the optimal spam probability, we report that a spam probability of 0.7 greatly improves the filtering performance.
In Chapter 7, we describe the processing flow combining the methods developed in Chapters 5 and 6. The paper is then summarized, including future prospects.
Creators :
Temma Seiya
Steel truss bridges, one of the bridge structures applicable to long spans, are widely used as marine bridges connecting the mainland and remote islands. Since such steel truss bridges are built over the sea, they are exposed to a severely corrosive environment due to airborne salt. In addition, many parts are not easy to inspect for abnormalities, so eliminating the risk of member damage is more difficult in these steel truss bridges than in general bridges. On the other hand, once a marine bridge is built, it becomes an indispensable facility for life on the island. Therefore, if there is no other traffic route to the island, the sustainability of the marine bridge is an important issue directly linked to the sustainability of remote island life. When member damage occurs in a steel truss bridge, whether the damage develops into chain damage or remains limited depends on redundancy, which means the margin in load-bearing capacity and load-bearing function. Bridges with redundancy can be restored by repair even when member damage occurs, because the damage does not develop into chain damage; in some cases, vehicles can still pass under traffic restrictions. Although redundancy is an important performance attribute for maintaining life on remote islands with no alternative traffic routes, there are few studies on methods for evaluating and improving the redundancy of long steel truss bridges used as marine bridges.
The purpose of this study is to propose a method for improving the redundancy of long steel truss bridges, and three research subjects are set to achieve this purpose. The first subject is the investigation of the effect of truss joint modeling on redundancy evaluation, which is needed to evaluate the redundancy of steel truss bridges appropriately. The second, also related to redundancy evaluation, is the development of a dynamic response calculation method that considers the vibration characteristics of steel truss bridges, which are vibration systems with multiple degrees of freedom. The third subject is a proposal of methods to improve the redundancy of long steel truss bridges. This paper consists of five chapters.
Chapter 1 is an introduction, and describes the background of the research, the setting of the purpose and research subjects, and the previous studies.
Chapter 2 describes the study on the modeling of the truss joint. In the analysis of healthy steel truss bridges with no member damage, the sectional forces can be calculated appropriately even with simple analysis modeling in which the frame elements of truss members are rigidly connected at the truss joints. On the other hand, in the redundancy analysis of steel truss bridges with member damage, it is shown that the shape of the gusset plates at the truss joints must be considered in the analysis modeling.
Chapter 3 describes the study on the method for calculating the dynamic response caused by damage to truss members. In some cases, the dynamic response due to member damage is calculated as if the bridge were a single-degree-of-freedom vibration system. This study instead develops a dynamic response calculation method that considers the vibration characteristics of long steel truss bridges by using the eigenvectors of the steel truss bridge with member damage. A method is proposed to set the magnitude of each eigenvector using the balance equation between the work done on the steel truss bridge by the sectional force released from the damaged member and the strain energy stored in the bridge. In addition, a method is proposed to calculate the dynamic response by setting the range of vibration modes using the sum of effective mass ratios and selecting, for each member, the eigenvector that has the greatest effect on the dynamic response. It is shown that the proposed method gives a redundancy evaluation closer to time-history response analysis than the method that treats the bridge as a single-degree-of-freedom vibration system.
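The sum of effective mass ratios used to set the range of vibration modes follows the standard modal formula. A minimal numerical sketch of that generic textbook formulation (not the thesis's own implementation) on a two-degree-of-freedom system:

```python
import numpy as np

def effective_mass_ratios(M, modes, r=None):
    """Effective modal mass ratio of each eigenvector phi:
    (phi^T M r)^2 / (phi^T M phi), normalized by the total mass r^T M r."""
    n = M.shape[0]
    if r is None:
        r = np.ones(n)           # unit influence vector
    total = r @ M @ r
    return [((phi @ M @ r) ** 2 / (phi @ M @ phi)) / total for phi in modes]

# 2-DOF check: for a complete M-orthogonal mode set, the ratios sum to 1,
# which is why a partial sum close to 1 justifies truncating the mode range.
M = np.diag([2.0, 1.0])
modes = [np.array([1.0, 1.0]), np.array([1.0, -2.0])]   # M-orthogonal pair
ratios = effective_mass_ratios(M, modes)
print(round(sum(ratios), 6))  # → 1.0
```

Truncating the modal expansion once the accumulated ratio is near 1 retains almost all of the mass participating in the response, which is the rationale for the mode-range setting described above.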
Chapter 4 describes the study on redundancy improvement for a long steel truss bridge. A combination of countermeasures against members that trigger chain damage and countermeasures against members with insufficient load-bearing capacity is planned. Analysis clarifies that X bracing, a reinforcing structure in an X shape, is an efficient reinforcement that works against multiple member-damage cases as a countermeasure against trigger members of chain damage. The load-bearing capacity is also verified by loading tests of specimens with the reinforced structure. Since the subject bridge has 18 truss panels where X bracing can be installed, placement patterns were examined by an optimization method. It is clarified that the weight of reinforcing material can be reduced by installing X braces at only four truss panels in alternating areas, rather than at all 18 truss panels.
Chapter 5 describes the summary of this study and future developments.
Creators :
Tajima Keiji
Aminoglycoside antibiotics can cause hearing impairment as an adverse reaction, and the outer hair cells of the basal turn of the cochlea are known to be particularly vulnerable. This study examined the protective effect of an astaxanthin nanoformulation against neomycin-induced hair cell damage. In utricle cultures from CBA/N mice exposed to neomycin, adding the astaxanthin nanoformulation to the culture medium significantly suppressed hair cell loss and oxidative stress. Furthermore, the astaxanthin nanoformulation was administered intratympanically, and changes in auditory brainstem response (ABR) thresholds before and after acoustic exposure, as well as the rate of hair cell loss, were evaluated. In the astaxanthin-treated group, post-exposure ABR threshold shifts and hair cell loss tended to be suppressed. Because of the blood-labyrinth barrier, few drugs are suitable for intratympanic administration, but the astaxanthin nanoformulation may be able to permeate the round window membrane, suggesting its potential to suppress inner-ear damage.
Creators :
小林 由貴
The development and implementation of industrial policy are essential in shaping a country's economic landscape. Industrial policy promotes industrialization, which in turn generates employment opportunities, enhances productivity, and diversifies the economy. The present dissertation studies the subject of industrial policy, with a particular emphasis on resource allocation and computable general equilibrium in Ghana.
Chapter 2 delves into the concept of industrial policy and its implementation in Africa in general, and Ghana, in particular. First, we examine Ghana’s past experience with industrial policy implementation and the reasons for its inability to attain the desired outcomes. Subsequently, in response to the call for a return to industrial policy, we argue in favor of a renewed implementation of industrial policy in Ghana. We posit that the likelihood of success is significantly higher with the benefit of better institutions.
Chapter 3 examines the subjects of firm-level productivity, productivity distribution, and resource allocation. In the first instance, we decompose labor productivity in Ghana and draw the conclusion that within-sector resource allocation primarily drives productivity growth, with structural change playing a limited role. Next, we analyze the gross allocative effect, finding evidence that resources are migrating toward sectors of lower productivity. Finally, we also examine productivity distribution through the lens of the power law distribution, establishing that firms involved in international trade exhibit higher levels of aggregation. Thus, allocating resources to such firms leads to greater productivity, thereby minimizing resource misallocation.
Chapter 4 presents a dynamic recursive computable general equilibrium model for Ghana, employing a Social Accounting Matrix (SAM) with 2015 as the benchmark year, and concludes with a brief analysis of the SAM. Chapter 5 examines several possible simulation scenarios. We build our simulations around two industrial policy strategies, labor-intensive and capital-intensive; furthermore, our simulations are informed by Ghana's industrial policy plan. We analyze various policies such as efficiency improvement, trade protection, free trade, and taxation policy.
We conclude that capital-intensive industrialization would work better under a free trade policy. Moreover, we find that the cost of protecting labor-intensive industries is less than the cost of safeguarding capital-intensive industries. We conclude the dissertation with a comprehensive discussion of the implications of our findings, acknowledge the limitations of our study, and propose potential avenues for further research. By doing so, we hope to contribute to the existing body of knowledge in our field and inspire future researchers to expand upon our work.
Creators :
Borges Jorge Tavares
Since social infrastructure, which was intensively developed during the high economic growth period, will deteriorate all at once in the future, maintenance and management of facilities will become an issue. Currently, facility inspection records are based on paper forms and are not premised on automatic processing by computer. The authors have developed "Smart Chosa" and realized a database of facility inspections and a GIS system. Smart Chosa was able to record the location of each inspection photo on a two-dimensional map, but because it was necessary to approach the deformed part when taking an inspection photo, it was not possible to grasp its position, direction, and size relative to the entire facility. Therefore, we applied 3D GIS to Smart Chosa for sabo dams, created 3D models from photographs taken on site, and conducted research on managing inspection results on the 3D model.
This study summarizes the results of research on the management of inspection photographs on a 3D GIS in order to improve the efficiency of managing inspection photographs of sabo facilities. This thesis consists of 6 chapters, and the main content of each chapter is as follows.
[Chapter 1: Introduction]
In this chapter, the current status and issues of the maintenance and management of social infrastructure in Japan were summarized. Existing research trends were organized regarding the utilization of 3D models for the maintenance and management of civil engineering facilities, high-precision positioning used for the alignment of 3D models, efficient inspection of concrete structures, and iPhone LiDAR applications. On that basis, the purpose and focus of this research were organized, and the structure and outline of this paper were described.
[Chapter 2: Comparison of 3D models and examination of application to sabo facilities maintenance management system]
In this chapter, three types of models, a BIM/CIM model, a 3D point cloud model, and a 3D surface model, are compared and examined as 3D models to be applied to the maintenance management system. The problem setting in this chapter is the selection of the 3D model to be used in this system; the constraint is that sabo dams, including existing dams, must be modelable in 3D. At present, there are few BIM/CIM models for sabo dams, so for application to existing sabo dams we believe that a 3D surface model, which can be created by SfM/MVS technology from photographs taken by UAVs, will be useful. The 3D model used in this research is therefore the 3D surface model.
Knowledge about the 3D models applicable to the sabo facility maintenance management system and knowledge for utilizing 3D models in a 3D GIS were obtained.
[Chapter 3: Performance evaluation of RTK receiver used for Sabo facility investigation support system]
In this chapter, a survey of the high-precision positioning technology necessary for positioning 3D models of sabo dams and inspection photos on a 3D GIS, and an evaluation of positioning performance at sabo dams and the surrounding forests, are conducted. The problem setting in this chapter is whether location information can be acquired during surveys of sabo facilities, together with accuracy verification. The constraint is that real-time high-precision positioning is required using inexpensive and small devices in environments that are unfavorable for satellite positioning (such as sabo dams and forests).
It was confirmed that the multi-band receiver whose performance was evaluated has a horizontal variation of 22 mm (2DRMS) even in a poor environment where about 70% of the sky directly below the sabo dam is covered, and that the method can be applied to aligning 3D models of photographs.
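The 22 mm (2DRMS) figure follows the usual definition of 2DRMS, twice the RMS of the horizontal radial error. A minimal sketch with made-up residuals (not the measured data):

```python
import math

def two_drms(east_err_m, north_err_m):
    """2DRMS: twice the RMS horizontal radial error (roughly a 95% bound
    for near-circular, zero-mean positioning error distributions)."""
    n = len(east_err_m)
    mean_sq = sum(e * e + x * x for e, x in zip(east_err_m, north_err_m)) / n
    return 2.0 * math.sqrt(mean_sq)

# Illustrative easting/northing residuals in meters.
print(round(two_drms([0.003, -0.004], [0.004, 0.003]), 3))  # → 0.01 (10 mm)
```

In practice the residuals would come from repeated fixes against a surveyed reference point, with the 2DRMS summarizing horizontal scatter in a single number.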
[Chapter 4: Investigation of image synthesis for creation of sabo dams inspection image]
In this chapter, as a basic examination for 3D model creation, image synthesis methods are organized. The problem setting is the normalization and image combination necessary for synthesizing (2D) inspection photographs; the constraint is that the inspection photography equipment is a smartphone for field surveys. For feature point detection in image synthesis, we compared two types of features, SIFT and AKAZE, and confirmed their accuracy by experiments. In addition, RANSAC was used to remove outliers. By combining these methods, we performed image synthesis using multiple photographs of the concrete surface of a sabo dam.
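The RANSAC outlier-removal step can be illustrated in miniature. This sketch fits only a 2-D translation between matched keypoints, a deliberate simplification of the homography fitting used in real image synthesis, with made-up correspondences:

```python
import random

def ransac_translation(matches, thresh=2.0, iters=200, seed=0):
    """Estimate a 2-D translation from point matches [((x1,y1),(x2,y2)), ...],
    keeping the hypothesis with the largest consensus set (RANSAC)."""
    rng = random.Random(seed)
    best_shift, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.choice(matches)   # minimal sample: 1 match
        dx, dy = x2 - x1, y2 - y1
        inliers = [((a, b), (c, d)) for (a, b), (c, d) in matches
                   if abs(c - a - dx) < thresh and abs(d - b - dy) < thresh]
        if len(inliers) > len(best_inliers):
            best_shift, best_inliers = (dx, dy), inliers
    return best_shift, best_inliers

# Four correct matches shifted by (10, 5) plus one gross outlier.
matches = [((0, 0), (10, 5)), ((1, 2), (11, 7)), ((3, 1), (13, 6)),
           ((5, 5), (15, 10)), ((2, 2), (40, 40))]
shift, inliers = ransac_translation(matches)
print(shift, len(inliers))  # → (10, 5) 4
```

The same consensus idea applies when the model is a full homography, as when stitching concrete-surface photographs; only the per-sample model fitting changes.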
[Chapter 5: 3D model creation by SfM/MVS and application to 3D GIS]
The problem setting in this chapter is the superimposed display of a 3D sabo dam model and inspection photographs. The constraint is that the equipment that can be used to create 3D models of inspection photographs is limited to equipment (compact and lightweight) that local workers can carry. In this chapter, we first present an overview of the "Smart Chosa" system, whose scope of application is expanded from 2D to 3D in this research. We then investigated SfM/MVS processing to create a 3D surface model. By creating a 3D model of the sabo dam and a 3D model of the inspection photos by SfM/MVS processing and importing them into a 3D GIS, we succeeded in superimposing the sabo dam and inspection photos on a 3D map.
In addition, we examined a method of creating a 3D surface model using an iPhone LiDAR application that can perform 3D measurement using the LiDAR function available from the iPhone 12 Pro onward. We compared the 3D model created with the iPhone LiDAR app and the 3D model created using Metashape, software that implements SfM/MVS processing, and confirmed the image resolution and positional accuracy for use as inspection photographs. To incorporate the created 3D models into 3D GIS software, we examined a method for matching orientation and position, and confirmed that the 3D model of the sabo dam and the 3D model of the inspection photographs can actually be superimposed on the 3D GIS.
[Chapter 6: Summary]
In this chapter, a summary of the results obtained in Chapters 2 to 5 and future issues were discussed.
The result of this research is a visualization method that makes it easy for people other than field investigators to understand the situation on site by importing 3D surface models acquired by various methods into a 3D GIS. These 3D surface models include SfM models created from photos taken with a UAV or smartphone, SfM models created from photos taken with a handheld RTK rover, and 3D models created with the iPhone LiDAR app.
By using this method, it is possible to grasp the position and direction of deformation of the sabo dam in 3D space, and by superimposing the photographs from each inspection, it is possible to grasp changes over time.
Creators :
山野 亨
Japan's declining birthrate and aging population are difficult to resolve in the short term, and the working-age population (the population aged 15 to 64) continues to decline. At the same time, the number of patients with mental illness in the working-age population is increasing. The medical profession, which is responsible for treatment, is required to improve efficiency through reforms to the way doctors work.
Therefore, expectations are placed on psychotherapy performed at home for the effective treatment of the working-age population. In this study, neurofeedback, one such psychological therapy, was taken up, and applied equipment development and verification were carried out toward its implementation at home. As a preclinical stage, measures were verified based on measurements of general volunteers.
Neurofeedback (NFB) is a form of psychotherapy using electroencephalogram (EEG) signals, in which one's own EEG is visualized and then self-controlled. It is attracting attention because it is a non-drug therapy and provides neuromodulation. NFB is being investigated for many clinical applications, with diverse target diseases including chronic pain, ADHD, depression, and mood disorders. However, we believe there are four tasks that must be addressed to ensure the effectiveness of this therapy.
Task 1 is overcoming the difficulty of installing EEG electrodes. NFB is considered to act on the plasticity of the cranial nerves, actively promoting the development of neural networks, and is expected to be more effective the more frequently training is performed. It must therefore be possible to perform NFB at home, which requires that EEG electrodes be easy to attach. We therefore prototyped an EEG headset with bipolar gel electrodes, and in a trial with children we were able to confirm EEG signals from 30 participants aged 5 to 20. Analysis of the recorded EEG revealed an age-dependent left-brain tendency in β waves and other bands, confirming consistency with previous findings.
Task 2 is determining the EEG derivation site for NFB training. EEG electrodes are usually placed on the scalp, but it is difficult to place them there unaided, so derivation from the forehead needs to be considered for easy electrode placement at home. Because EEG waveforms differ regionally within the forehead, the most appropriate derivation site must be selected. We explored the optimal forehead site based on its correlation with the vertex, the usual EEG derivation position for NFB. We then performed an EEG network analysis during NFB using the EEG derived from the vertex and from the optimal forehead site, and analyzed how the brain network during NFB differs with the derivation site. For this task, we identified the optimal forehead derivation site and showed that NFB using EEG derived from this site engages the same network as NFB using EEG derived from the vertex.
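The site-selection step above can be sketched as follows; this is a minimal illustration assuming simultaneous recordings from the vertex and several candidate forehead channels (the channel layout is hypothetical, and the actual study's correlation procedure may differ):

```python
import numpy as np

def best_forehead_site(vertex_eeg, forehead_channels):
    """Pick the forehead channel whose signal correlates most strongly with the
    vertex recording, the usual EEG derivation position for NFB."""
    corrs = [np.corrcoef(vertex_eeg, ch)[0, 1] for ch in forehead_channels]
    return int(np.argmax(corrs)), corrs
```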
Task 3 is the method of selecting the EEG frequency band to be derived and self-regulated in NFB therapy (the training-target band). In previous studies, the EEG frequencies targeted for NFB therapy are diverse and not standardized; even for the same disease, various bands have been selected, with the band decided individually according to the patient's pathology and condition. To make this decision more principled, we considered it necessary to determine the therapeutic EEG frequency by comparing the basic EEG rhythms of healthy subjects and patients.
In this study, we created an EEG basic-rhythm evaluation program and collected basic-rhythm data from randomly selected subjects. The program consists of seven stages: Eyes Open, Eyes Closed, 0-Back, Rest 1, 2-Back, Rest 2, and Healing Picture. EEG changes arise from external stimuli and states such as eye opening and closing, concentration, and relaxation, so the program was designed around multiple stimuli that affect EEG dynamics. Its usefulness was confirmed in a preliminary examination of the dominant fluctuation regions by topographic analysis and by network analysis during execution of the program. EEG measurements with the program were carried out for 89 subjects recruited from the general public, and a database was created. Using the optimal forehead sites (left and right) obtained in Task 2 as derivation sites, significance tests were performed on the power value and content rate of each EEG frequency band at each stage of the program. The α power value increased 2.52-fold with eyes closed, and the θ power value during 2-Back increased 1.67-fold compared with 0-Back. We also examined the possibility of clinical application by analyzing the correlation between scores on questionnaires used in clinical diagnosis and EEG components.
The questionnaires used were mainly the CSI (Central Sensitization Inventory) and the POMS 2 (Profile of Mood States 2).
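As a minimal sketch of the band-power comparison between stages, the following computes per-band power of a single-channel EEG epoch via an FFT periodogram; the sampling rate and band edges are assumptions for illustration, not values reported in the study:

```python
import numpy as np

FS = 256  # assumed sampling rate (Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_power(eeg, fs=FS):
    """Absolute power per frequency band from a 1-D EEG epoch (FFT periodogram)."""
    freqs = np.fft.rfftfreq(len(eeg), 1 / fs)
    psd = np.abs(np.fft.rfft(eeg)) ** 2 / len(eeg)
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in BANDS.items()}

def stage_ratio(stage_a, stage_b, band, fs=FS):
    """Power ratio of one band between two stages, e.g. Eyes Closed vs Eyes Open."""
    return band_power(stage_a, fs)[band] / band_power(stage_b, fs)[band]
```

With real recordings, `stage_ratio(eyes_closed, eyes_open, "alpha")` would quantify increases like the 2.52-fold α change reported above.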
Task 4 is NFB scoring. Continuing psychotherapy requires a visualized score as a reward. We compared two scores, a time-ratio score and an amplitude-ratio score, analyzed their correlations with the questionnaires used in Task 3, and examined which score is optimal. The results suggested that the SMR band correlates best with psychological activity during NFB.
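The abstract does not give the exact formulas for the two scores; the following sketch uses common-sense definitions as assumptions only (time above a threshold, and amplitude relative to a resting baseline):

```python
import numpy as np

def time_ratio_score(band_amp, threshold):
    """Fraction of training time the target-band amplitude stays above a threshold
    (assumed definition of the time-ratio score)."""
    return float(np.mean(np.asarray(band_amp) > threshold))

def amplitude_ratio_score(band_amp, baseline_amp):
    """Mean target-band amplitude during training relative to a resting baseline
    (assumed definition of the amplitude-ratio score)."""
    return float(np.mean(band_amp) / np.mean(baseline_amp))
```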
Some of the psychological scales yielded data that were probably above the general average level, which may provide hypotheses at the preclinical stage. The four tasks were conducted to establish the technical requirements and effectiveness evaluation for the practical application of NFB, a cognitive-psychological training expected to be used frequently at home by children through working-age adults.
This research addressed the four tasks and demonstrated the feasibility of frequent NFB training at home for people from childhood to working age. As a preclinical stage, the study was limited to what can be resolved through methodological verification with general participants. In the future, the effectiveness of this approach will be further evaluated by comparison with clinical data on chronic pain and on mental illnesses such as depression and developmental disorders.
Creators :
Oda Kazuyuki
Open source software (OSS) is adopted for embedded systems, servers, and other uses because it offers quick delivery, cost reduction, and standardization. OSS is therefore used not only personally but also commercially. Many OSS projects are developed under a distinctive style known as the bazaar method: faults are detected and fixed by developers around the world, and the fixes are reflected in the next release. Many OSS projects are also developed and managed using the large volumes of fault data recorded in bug tracking systems, and are maintained by a small number of developers serving many users.
According to the 2022 Open Source Security and Risk Analysis (OSSRA), OSS is an essential part of proprietary software: 97% of the audited codebases contained OSS, and 78% of the code in those codebases was open source. On the other hand, OSS raises issues from various perspectives. OSS users therefore need to decide whether to use a given OSS in light of each issue, and the managers of open source projects need to manage their projects appropriately because OSS has a large impact on software worldwide.
This thesis focuses on the following three of these issues, examining methods by which OSS users and open source project managers can evaluate the stability of open source projects:
1. Selection evaluation and licensing: methods for OSS users to select from among the many available OSS,
2. Vulnerability support: predicting the fix priority of faults reported in OSS,
3. Maintenance and quality assurance: predicting the appropriate timing of OSS version upgrades, considering the development effort required of OSS users after the upgrade.
In “1. Selection evaluation and licensing,” we derive an OSS-oriented EVM by applying earned value management (EVM) to several open source projects. EVM is a project management methodology for measuring project performance and progress. To derive the OSS-oriented EVM, we apply stochastic models based on software reliability growth models (SRGMs) that account for the uncertainty of the development environment in open source projects. We also improve the method of deriving effort in open source projects: with the existing method, some indices of the OSS-oriented EVM cannot be derived, and we resolve this issue. The derived OSS-oriented EVM helps OSS users and open source project managers evaluate the stability of their current projects and serves as a decision-making tool for OSS adoption and project management. From a different perspective, we also evaluate project stability in terms of the speed of fault fixing, by predicting the time transition of fixing OSS faults reported in the future.
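For reference, the standard EVM indices underlying this discussion can be computed as below; in the OSS-oriented EVM the planned, earned, and actual effort values would be estimated from fault data via an SRGM rather than from a conventional project plan, and that derivation is not reproduced here:

```python
def evm_indices(pv, ev, ac):
    """Standard EVM indices from planned value (PV), earned value (EV), and
    actual cost (AC); SPI/CPI > 1 indicate ahead-of-schedule / under-cost."""
    return {
        "SV": ev - pv,    # schedule variance
        "CV": ev - ac,    # cost variance
        "SPI": ev / pv,   # schedule performance index
        "CPI": ev / ac,   # cost performance index
    }
```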
In “2. Vulnerability support,” from the viewpoint of open source project managers, we create metrics to detect faults with a high fix priority and a predicted long fixing time. We further improve the detection accuracy of the proposed metrics by learning not only the data of a specific version but also the bug report data of past versions, using a random forest that exploits the similarity of fault-fixing characteristics among versions. This allows project managers to identify the faults that should be fixed first when a large number of faults are reported, and facilitates project operations.
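A minimal sketch of the cross-version training idea, using scikit-learn's RandomForestClassifier on synthetic report features; the feature design and labels here are hypothetical stand-ins for the thesis's metrics:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_fix_priority_model(features_by_version, labels_by_version):
    """Pool bug-report features and priority labels across past versions,
    exploiting the similarity of fault-fixing characteristics between versions,
    and fit a random forest to flag faults needing priority fixing."""
    X = np.vstack(features_by_version)
    y = np.concatenate(labels_by_version)
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X, y)
    return model
```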
In “3. Maintenance and quality assurance,” as an optimal maintenance problem, we predict the appropriate timing of OSS version upgrades considering the maintenance effort required of OSS users after upgrading. Continuing to use a specific OSS version while ignoring its end of life is dangerous from a vulnerability standpoint, so versions should be upgraded periodically; however, maintenance costs increase when upgrades are frequent. We therefore find the optimal maintenance time by minimizing the total expected software maintenance effort from the OSS users' standpoint. In particular, we reflect the progress of open source projects by using the OSS-oriented EVM in deriving the optimal maintenance time.
In conclusion, we found that the proposed methods are applicable to the stability evaluation of open source projects from the three perspectives above. In particular, the OSS-oriented EVM discussed in “1. Selection evaluation and licensing” contributes to the visualization of maintenance effort in open source projects. The proposed methods can potentially contribute to the future development of OSS.
Creators :
Sone Hironobu
Hyperspectral (HS) imaging captures the detailed spectral signature of each spatial location of a scene and thus enables a better understanding of material characteristics than traditional imaging systems. However, existing HS sensors can in practice provide only low-spatial-resolution images at video rate. Reconstructing a high-resolution HS (HR-HS) image by fusing a low-resolution HS (LR-HS) image and a high-resolution RGB (HR-RGB) image with image processing and machine learning techniques, called hyperspectral image super-resolution (HSI SR), has therefore attracted much attention. Existing HSI SR methods fall into two research directions: mathematical-model-based methods and deep-learning-based methods. Mathematical-model-based methods formulate the degradation procedure of the observed LR-HS and HR-RGB images with a mathematical model and solve it with an optimization strategy. Because the fusion problem is inherently ill posed, most works leverage hand-crafted priors to model the underlying structure of the latent HR-HS image and pursue a more robust solution. Recently, deep-learning-based approaches have evolved for HS image reconstruction, with current efforts mainly concentrated on designing more complicated and deeper network architectures to pursue better performance. Although they achieve impressive reconstruction results compared with mathematical-model-based methods, existing deep learning methods have three limitations. 1) They are usually trained in a fully supervised manner and require a large-scale external dataset containing the degraded observations (the LR-HS/HR-RGB images) together with the corresponding HR-HS ground-truth images, which are difficult to collect, especially for the HSI SR task.
2) They aim to learn a common model from training triplets, which is insufficient to capture the abundant image priors of diverse HR-HS images with rich contents, whose spatial structures and spectral characteristics differ considerably. 3) They generally assume that the spatial and spectral degradation procedures for capturing the LR-HS and HR-RGB images are fixed and known, and synthesize training triplets accordingly, which yields very poor recovery performance for observations with different degradation procedures. To overcome these limitations, our research proposes an unsupervised learning framework for HSI SR that learns the specific prior of the scene under study without any external dataset. To deal with observations captured under different degradation procedures, we further automatically learn the spatial blurring kernel and the camera spectral response function (CSF) associated with the specific observations, and incorporate them into the unsupervised framework to build a highly generalized blind unsupervised HSI SR paradigm.
Moreover, motivated by the fact that cross-scale pattern recurrence frequently exists in natural images, we synthesize pseudo training triplets from the degraded versions of the LR-HS and HR-RGB observations and the observations themselves, and conduct both supervised and unsupervised internal learning to obtain a scene-specific model for HSI SR, dubbed generalized internal learning. Overall, the main contributions of this dissertation are three-fold, summarized as follows:
1. A deep unsupervised fusion-learning framework for HSI SR is proposed. Inspired by the insight that convolutional neural networks themselves possess large amounts of low-level image statistics (priors) and generate images with regular spatial structure and spectral patterns more easily than noisy data, this study proposes an unsupervised framework that automatically generates the target HS image from the LR-HS and HR-RGB observations alone, without any external training database. Specifically, we explore two paradigms for HS image generation: 1) learning the HR-HS target with randomly sampled noise as the input of the generative network, from a data-generation view; and 2) reconstructing the target with the fused context of the LR-HS and HR-RGB observations as the input of the generative network, from a self-supervised learning view. Both paradigms automatically model the specific priors of the scene under study by optimizing the parameters of the generative network rather than the raw HR-HS target. Concretely, we employ an encoder-decoder architecture for the generative network and generate the target HR-HS image from the noise or fused-context input. Assuming that the spatial and spectral degradation procedures for the observed LR-HS and HR-RGB images are known, we can produce approximated versions of the observations by degrading the generated HR-HS image, and use the reconstruction errors of the observations as the loss function for network training. Our unsupervised framework not only models the specific prior of the scene under study to reconstruct a plausible HR-HS estimate without any external dataset, but is also easily adapted to observations captured under various imaging conditions, simply by changing the degradation operations in the framework.
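The degrade-and-compare loss can be sketched in NumPy as follows, with average pooling standing in for the known spatial degradation and a given CSF matrix for the spectral one; the actual work trains a convolutional generative network, and only the loss construction is shown:

```python
import numpy as np

def spatial_degrade(hrhs, scale):
    """Average pooling as a simple known spatial degradation (blur + downsample)."""
    h, w, c = hrhs.shape
    return hrhs.reshape(h // scale, scale, w // scale, scale, c).mean(axis=(1, 3))

def spectral_degrade(hrhs, csf):
    """Project the spectral bands to RGB with a camera spectral response (C x 3)."""
    return hrhs @ csf

def reconstruction_loss(generated, lrhs_obs, hrrgb_obs, csf, scale):
    """Training signal: errors between degraded versions of the generated HR-HS
    image and the two real observations; no HR-HS ground truth is needed."""
    e_spatial = np.mean((spatial_degrade(generated, scale) - lrhs_obs) ** 2)
    e_spectral = np.mean((spectral_degrade(generated, csf) - hrrgb_obs) ** 2)
    return e_spatial + e_spectral
```

The loss vanishes only when the generated image is consistent with both observations under the assumed degradations, which is what drives the network toward the HR-HS target.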
2. A novel blind learning method for unsupervised HSI SR is proposed. The deep unsupervised framework above requires the spatial and spectral degradation procedures to be known. However, different optical designs of HS imaging devices and RGB cameras induce different degradation processes, such as the spatial blurring kernels for capturing LR-HS images and the camera spectral response functions (CSFs) of RGB sensors, and this detailed knowledge is difficult for general users to obtain. Moreover, the actual degradation is further distorted under various imaging conditions, so in real applications the degradation knowledge for each scene under study is rarely available. To handle this issue, this study develops a parallel blind unsupervised approach that automatically and jointly learns the degradation parameters and the generative network. Specifically, according to which components are unknown, we propose three variants: 1) a spatial-blind method that automatically learns the spatial blurring kernel used in capturing the LR-HS observation, given the known CSF of the RGB sensor; 2) a spectral-blind method that automatically learns the CSF transformation matrix used in capturing the HR-RGB observation, given the known blurring kernel of the HS imaging device; and 3) a complete-blind method that simultaneously learns both the spatial blurring kernel and the CSF matrix. Building on our unsupervised framework, we design special convolution layers that realize the spatial and spectral degradation procedures in parallel, treating the layer parameters as the weights of the blurring kernel and the CSF matrix to be learned automatically.
The spatial degradation procedure is implemented as a depthwise convolution layer, in which the same kernel is shared across all spectral channels and the stride is set to the expanding scale factor, while the spectral degradation procedure is realized as a pointwise convolution layer with three output channels that produces the approximated HR-RGB image. With this learnable implementation of the degradation procedures, we construct an end-to-end framework that jointly learns the specific prior of the target HR-HS image and the degradation knowledge, building a highly generalized HSI SR system. Moreover, the proposed framework can be unified to realize the different blind HSI SR variants by fixing the parameters of the implemented convolutions to the known blurring kernel or CSF, and is thus highly adaptable to arbitrary observations for HSI SR.
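The two degradation layers can be mimicked in NumPy as below; in the actual framework these are convolution layers whose weights (the shared blurring kernel and the CSF matrix) are learned, whereas here they are fixed inputs to the functions:

```python
import numpy as np

def depthwise_spatial_degrade(x, kernel, stride):
    """Depthwise convolution: one k x k kernel shared by every spectral channel,
    with stride equal to the scale factor -- the spatial degradation layer."""
    h, w, c = x.shape
    k = kernel.shape[0]
    out_h, out_w = (h - k) // stride + 1, (w - k) // stride + 1
    out = np.empty((out_h, out_w, c))
    for i in range(out_h):
        for j in range(out_w):
            patch = x[i * stride:i * stride + k, j * stride:j * stride + k, :]
            out[i, j] = np.tensordot(patch, kernel, axes=([0, 1], [0, 1]))
    return out

def pointwise_spectral_degrade(x, csf):
    """1 x 1 (pointwise) convolution with three output channels -- the CSF
    matrix producing the approximated HR-RGB image."""
    return x @ csf
```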
3. A generalized internal learning method for HSI SR is proposed. Motivated by the strong internal data repetition and cross-scale internal recurrence of natural images, we further synthesize labeled training triplets from the LR-HS and HR-RGB observations alone, and combine them with the unlabeled observations as training data to conduct both supervised and unsupervised learning, constructing a more robust image-specific CNN model of the HR-HS data under study. Specifically, we downsample the observed LR-HS and HR-RGB images to their "son" versions and produce training triplets from the LR-HS/HR-RGB sons and the LR-HS observation, where the relation among them is the same as that among the LR-HS/HR-RGB observations and the HR-HS target, despite the difference in resolution. With the synthesized training samples, an image-specific CNN model can be trained to estimate the HR-HS target from the observations as input, dubbed internal learning. However, the synthesized labeled samples are usually few, especially for a large spatial expansion factor, and further downsampling the LR-HS observation causes severe spectral mixing of surrounding pixels, producing a deviation between the spectral mixing levels in the training and test phases. These limitations can degrade the super-resolved performance of naive internal learning. To mitigate them, we combine naive internal learning with our self-supervised learning method for unsupervised HSI SR, and present a generalized internal learning method that achieves more robust HR-HS image reconstruction.
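The triplet synthesis for internal learning can be sketched as follows, with average pooling again standing in for the true degradation; the scale factor and array shapes are illustrative assumptions:

```python
import numpy as np

def downsample(img, scale):
    """Average pooling standing in for the degradation to a 'son' version."""
    h, w, c = img.shape
    return img.reshape(h // scale, scale, w // scale, scale, c).mean(axis=(1, 3))

def synthesize_internal_triplet(lrhs_obs, hrrgb_obs, scale):
    """Build a pseudo labeled triplet from the two observations alone:
    (LR-HS son, HR-RGB son) -> LR-HS observation mirrors, one scale down,
    the relation (LR-HS, HR-RGB) -> unknown HR-HS target."""
    return downsample(lrhs_obs, scale), downsample(hrrgb_obs, scale), lrhs_obs
```

A CNN trained on such triplets can then be applied to the original observations to estimate the HR-HS target, which is the internal-learning step the method generalizes.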
Creators :
LIU ZHE