Creators : Srivastava Pratibha Updated At : 2021-12-07 00:34:44
Creators : Md. Istiaq Obaidi Tanvir Updated At : 2021-12-07 00:34:47
The development and implementation of industrial policy are essential in shaping a country's economic landscape. Industrial policy promotes industrialization, which, in turn, generates employment opportunities, enhances productivity, and diversifies the economy. The present dissertation studies the subject of industrial policy, with a particular emphasis on resource allocation and computable general equilibrium in Ghana. Chapter 2 delves into the concept of industrial policy and its implementation in Africa in general and in Ghana in particular. First, we examine Ghana's past experience with industrial policy implementation and the reasons for its inability to attain the desired outcomes. Subsequently, in response to the call for a return to industrial policy, we argue in favor of a renewed implementation of industrial policy in Ghana. We posit that the likelihood of success is significantly higher with the benefit of better institutions. Chapter 3 examines firm-level productivity, productivity distribution, and resource allocation. First, we decompose labor productivity in Ghana and conclude that within-sector resource allocation primarily drives productivity growth, with structural change playing a limited role. Next, we analyze the gross allocative effect, finding evidence that resources are migrating toward sectors of lower productivity. Finally, we examine the productivity distribution through the lens of the power law, establishing that firms involved in international trade exhibit higher levels of aggregation; allocating resources to such firms therefore leads to greater productivity and minimizes resource misallocation. Chapter 4 presents a dynamic recursive computable general equilibrium model for Ghana, employing a Social Accounting Matrix (SAM) with 2015 as the benchmark year, and concludes with a brief analysis of the SAM. Chapter 5 examines several possible simulation scenarios. We build our simulations around two industrial policy strategies, labor-intensive and capital-intensive, and our simulations are informed by Ghana's industrial policy plan. We analyze various policies such as efficiency improvement, trade protection, free trade, and taxation policy. We conclude that capital-intensive industrialization would work better under a free trade policy. Moreover, we find that the cost of protecting labor-intensive industries is less than the cost of safeguarding capital-intensive industries. We conclude the dissertation with a comprehensive discussion of the implications of our findings, acknowledge the limitations of our study, and propose potential avenues for further research. By doing so, we hope to contribute to the existing body of knowledge in our field and inspire future researchers to expand upon our work.
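The decomposition of labor productivity growth into within-sector and structural-change components described for Chapter 3 is commonly computed with a shift-share formula. The snippet below is a minimal, hedged sketch of that calculation in Python; the sector labels and numbers are illustrative placeholders, not figures from the dissertation.

```python
# Hedged sketch: a standard shift-share decomposition of aggregate labor
# productivity growth into "within-sector" and "structural change" terms.
# Sector names and numbers are illustrative, not data from the dissertation.
import numpy as np

# employment shares and labor productivity per sector at t0 and t1 (hypothetical)
shares_t0 = np.array([0.45, 0.35, 0.20])   # agriculture, industry, services
shares_t1 = np.array([0.40, 0.37, 0.23])
prod_t0   = np.array([1.0, 2.5, 2.0])      # output per worker (arbitrary units)
prod_t1   = np.array([1.2, 2.8, 2.1])

within      = np.sum(shares_t0 * (prod_t1 - prod_t0))                # growth inside sectors
structural  = np.sum(prod_t0 * (shares_t1 - shares_t0))              # labor reallocation between sectors
interaction = np.sum((shares_t1 - shares_t0) * (prod_t1 - prod_t0))  # joint (dynamic) term

total = shares_t1 @ prod_t1 - shares_t0 @ prod_t0
assert np.isclose(total, within + structural + interaction)
print(f"total={total:.3f} within={within:.3f} structural={structural:.3f} interaction={interaction:.3f}")
```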
Creators : Borges Jorge Tavares Updated At : 2023-12-13 10:41:44
Since social infrastructure, which was intensively developed during the high economic growth period, will deteriorate all at once in the future, maintenance and management of facilities will become an issue. Currently, facility inspection records are based on paper forms and are not premised on automatic processing by computer. The authors developed the "Smart Chosa" system and realized a database of facility inspections and a GIS system. Smart Chosa was able to record the location of an inspection photo on a two-dimensional map, but because it was necessary to approach the deformed part when taking the inspection photo, it was not possible to grasp the position, direction, and size of the entire facility. Therefore, we applied 3D GIS to Smart Chosa for sabo dams, created a 3D model from photographs taken on site, and conducted research on managing inspection results on the 3D model.

This study summarizes the results of research on the management of inspection photographs on a 3D GIS in order to improve the efficiency of managing inspection photographs of sabo facilities. This thesis consists of six chapters, and the main content of each chapter is as follows.

[Chapter 1: Introduction] This chapter summarizes the current status and issues of the maintenance and management of social infrastructure in Japan. It organizes existing research trends on the utilization of 3D models for maintenance and management of civil engineering facilities, high-precision positioning used for alignment of 3D models, efficient inspection of concrete structures, and iPhone LiDAR applications. On that basis, the purpose and focus of this research are described, together with the structure and outline of this thesis.

[Chapter 2: Comparison of 3D models and examination of application to a sabo facility maintenance management system] In this chapter, three types of models, a BIM/CIM model, a 3D point cloud model, and a 3D surface model, are compared and examined as 3D models to be applied to the maintenance management system. The problem setting in this chapter is the selection of a 3D model to be used in this system. The constraint is that sabo dams, including existing dams, must be modeled in 3D. At present, there are few BIM/CIM models of sabo dams, so for application to existing sabo dams we believe that a 3D surface model, which can be created by SfM/MVS technology from photographs taken by UAVs, will be useful. The 3D model used in this research is therefore a 3D surface model. Knowledge about the 3D models that can be applied to the sabo facility maintenance management system and knowledge for utilizing the 3D model in a 3D GIS were obtained.

[Chapter 3: Performance evaluation of the RTK receiver used for the sabo facility investigation support system] In this chapter, a survey of the high-precision positioning technology necessary for positioning 3D models of sabo dams and inspection photos on a 3D GIS, and an evaluation of positioning performance at sabo dams and in surrounding forests, are conducted. The problem setting in this chapter is whether location information can be acquired during surveys of sabo facilities, together with accuracy verification. The constraint is that real-time high-precision positioning is required using inexpensive and small devices in environments that are unfavorable for satellite positioning (such as sabo dams and forests). It was confirmed that the multi-band receiver whose performance was evaluated has a horizontal variation of 22 mm (2DRMS) even in a poor environment directly below the sabo dam where about 70% of the sky is covered, and that the method can be applied to the alignment of 3D models of photographs.

[Chapter 4: Investigation of image synthesis for creation of sabo dam inspection images] In this chapter, as a basic examination of 3D model creation, the image synthesis method is organized. The problem setting in this chapter is the normalization and image combination necessary for synthesizing inspection photographs (2D). The constraint is that the inspection photography equipment is a smartphone for field surveys. For feature point detection in image synthesis, we compared two types of features, SIFT features and AKAZE features, and confirmed their accuracy by experiments. In addition, RANSAC was used as an outlier removal method. By combining these methods, we performed image synthesis using multiple photographs of the concrete surface of a sabo dam.

[Chapter 5: 3D model creation by SfM/MVS and application to 3D GIS] The problem setting in this chapter is the superimposed display of a 3D sabo dam model and inspection photographs. The constraint is that the equipment that can be used to create a 3D model of inspection photographs is limited to equipment (compact and lightweight) that can be carried by local workers. In this chapter, we first present an overview of the "Smart Chosa" system, whose scope of application is expanded from 2D to 3D in this research. We then investigated SfM/MVS processing to create a 3D surface model. By creating a 3D model of the sabo dam and a 3D model of inspection photos through SfM/MVS processing and importing them into a 3D GIS, we succeeded in superimposing the sabo dam and inspection photos on a 3D map. In addition, we examined a method of creating a 3D surface model using an iPhone LiDAR application that performs 3D measurement with the LiDAR function installed in the iPhone 12 Pro and later models. We compared the 3D model created with the iPhone LiDAR app and the 3D model created using Metashape, software that implements SfM/MVS processing, and confirmed the image resolution and positional accuracy for use as inspection photographs. In order to incorporate the created 3D models into 3D GIS software, we examined a method for matching their orientation and position, and confirmed that the 3D model of the sabo dam and the 3D model of the inspection photograph can actually be superimposed on the 3D GIS.

[Chapter 6: Summary] In this chapter, the results obtained in Chapters 2 to 5 and future issues are discussed. The result of this research is a visualization method that makes it easy for people other than field investigators to understand the situation of the site by importing 3D surface models acquired by various methods into a 3D GIS. The 3D surface models include SfM models created from photos taken with a UAV or smartphone, SfM models created from photos taken with a handheld RTK rover, and 3D models created with the iPhone LiDAR app. By using this method, it is possible to grasp the deformation position and deformation direction of the sabo dam in 3D space, and by superimposing the photographs of each inspection, it is possible to grasp changes over time.
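As a concrete illustration of the Chapter 4 workflow, the following is a minimal sketch, in Python with OpenCV, of AKAZE feature detection combined with RANSAC outlier removal to align two overlapping inspection photographs. It is not the thesis code; the file names are placeholders, and SIFT could be substituted for AKAZE.

```python
# Hedged sketch (not the thesis code): AKAZE feature matching with RANSAC
# outlier removal to align two overlapping inspection photographs with OpenCV.
# File names are placeholders.
import cv2
import numpy as np

img1 = cv2.imread("dam_photo_left.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("dam_photo_right.jpg", cv2.IMREAD_GRAYSCALE)

akaze = cv2.AKAZE_create()                      # cv2.SIFT_create() could be swapped in
kp1, des1 = akaze.detectAndCompute(img1, None)
kp2, des2 = akaze.detectAndCompute(img2, None)

# Brute-force matching with Hamming distance (AKAZE descriptors are binary)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# RANSAC removes mismatched point pairs while estimating the homography
src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp the first photo into the second photo's frame for a simple mosaic
warped = cv2.warpPerspective(img1, H, (img2.shape[1], img2.shape[0]))
print(f"matches: {len(matches)}, RANSAC inliers: {int(inlier_mask.sum())}")
```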
Creators : 山野 亨 Updated At : 2023-12-13 09:50:26
Japan's declining birthrate and aging population are difficult to resolve in the short term, and the working-age population, which refers to the population aged 15 to 65, continues to decline. At the same time, the number of patients with mental illness in the working-age population is increasing. The medical side, which is responsible for treatment, is required to improve efficiency through reforms to the way doctors work. Therefore, expectations are placed on psychotherapy performed at home for the effective treatment of the working-age population. In this study, neurofeedback, one such psychological therapy, was taken up, and applied equipment development and verification were carried out for its implementation at home. As a preclinical stage, the measures were verified based on measurements of volunteer participants from the general public. Neurofeedback (NFB) is a psychotherapy that uses electroencephalogram (EEG) signals: one's own EEG is visualized, and the visualized EEG is self-controlled. It is attracting attention because it is a non-drug therapy and provides neuromodulation. NFB is being investigated for many clinical applications, with diverse target diseases including chronic pain, ADHD, depression, and mood disorders. However, we believe that four tasks must be addressed to ensure the effectiveness of this therapy. Task 1 is overcoming the difficulty of installing EEG electrodes. NFB is considered a therapy that affects the plasticity of the cranial nerves and actively promotes the development of neural networks, and it is expected to be highly effective if the training frequency is high. It therefore needs to be performed at home, which requires that EEG electrodes can be installed easily. We made a prototype of an EEG headset with bipolar gel electrodes, and in a trial verification with children we were able to confirm the EEG signals of 30 people aged 5 to 20 years old. Analysis of the recorded EEG revealed an age-dependent left-brain tendency in β waves, among other findings, confirming consistency with previous reports. Task 2 is determining the EEG derivation site for the NFB training target. For electroencephalography, lead electrodes are usually placed on the scalp, but it is difficult to place electrodes on the scalp by oneself. Forehead derivation therefore needs to be considered for easy EEG electrode placement at home. There are regional differences in EEG waveforms within the forehead, and the most appropriate derivation site must be selected. For NFB, we explored the optimal site based on the correlation with the top of the head, which is the usual EEG derivation position. Next, we performed an EEG network analysis during NFB using the EEG derived from the top of the head and the EEG derived from the optimal forehead region, and analyzed the difference in the brain network during NFB caused by the difference in derivation site. For this second task, we identified the optimal site for deriving EEG from the forehead and showed that NFB using EEG derived from this site engages the same network as NFB using EEG derived from the top of the head. Task 3 is a method of selecting the EEG frequency band to be derived and self-controlled in NFB therapy (the training-target EEG frequency band).
In previous studies, the EEG frequencies targeted for NFB therapy are diverse and not standardized. Even for the same disease, various EEG frequency bands are selected for NFB, and the frequency band is decided individually according to the patient's pathology and condition. To make the frequency band determination more logical, we thought it necessary to determine the therapeutic EEG frequency from a comparison of the basic rhythms of healthy subjects and patients. In this study, we created an EEG basic rhythm evaluation program and collected EEG basic rhythm data from randomly selected subjects. The program consists of seven stages: eyes-open, eyes-closed, 0-back, Rest 1, 2-back, Rest 2, and healing-picture stages. Changes in brain waves occur due to external stimuli such as eye opening and closing, concentration, and relaxation, so the program was designed around multiple stimuli that affect EEG dynamics. Its usefulness was confirmed by a preliminary examination of the dominant fluctuation regions by topographic analysis and by network analysis during execution of the program. EEG measurements with the basic rhythm evaluation program were carried out for 89 subjects recruited from the general public, and a database was created. Using the optimal forehead sites (left and right) obtained in Task 2 as EEG derivation sites, significance tests were performed on the power value and content rate of each EEG frequency band at each stage of the program. The α power value increased 2.52-fold when the eyes were closed, and the θ power value increased 1.67-fold during the 2-back stage compared with the 0-back stage. We examined the possibility of clinical application by analyzing the correlation between EEG components and the scores of questionnaires used in clinical diagnosis, mainly the CSI (Central Sensitization Inventory) and POMS 2 (Profile of Mood States 2). Task 4 is NFB scoring. Continuing psychotherapy requires a visualized score that serves as a reward. We compared two scores, a time-ratio score and an amplitude-ratio score, analyzed the correlation between the questionnaires used in Task 3 and the two scores, and examined the optimal score. The results suggested that the SMR band correlates best with psychological activity during NFB. Some of the psychological scales included data probably above the general average level, which may have provided hypotheses at the preclinical stage. The four tasks demonstrated the technical requirements and effectiveness evaluation for the practical application of NFB as cognitive psychological training, and showed the feasibility of frequent NFB training at home for patients from childhood to working age. As a preclinical stage, the study was limited to what could be verified with participants from the general public.
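Tasks 3 and 4 rest on comparing band-limited EEG power values (for example θ, α, and SMR) across the program stages. The following is a minimal, hedged sketch in Python of how such band powers can be computed from a single-channel segment with Welch's method; the sampling rate, the signal, and the band limits are conventional assumptions rather than values from the thesis.

```python
# Hedged sketch, not the thesis code: band power of a single-channel EEG
# segment via Welch's PSD (scipy). Band limits are conventional assumptions.
import numpy as np
from scipy.signal import welch

fs = 250.0                                   # sampling rate in Hz (assumed)
eeg = np.random.randn(int(fs * 60))          # placeholder for a 60 s forehead EEG segment

freqs, psd = welch(eeg, fs=fs, nperseg=int(fs * 2))   # 2 s windows

bands = {"theta": (4, 8), "alpha": (8, 13), "SMR": (12, 15), "beta": (15, 30)}

def band_power(freqs, psd, lo, hi):
    """Integrate the PSD over [lo, hi) Hz."""
    mask = (freqs >= lo) & (freqs < hi)
    return np.trapz(psd[mask], freqs[mask])

total = band_power(freqs, psd, 1, 40)
for name, (lo, hi) in bands.items():
    p = band_power(freqs, psd, lo, hi)
    print(f"{name}: power={p:.4f}, content rate={p / total:.2%}")
```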
In the future, the effectiveness of this study will be further evaluated by comparison with clinical data in the areas of chronic pain and mental illnesses such as depression and developmental disorders.
Creators : Oda Kazuyuki Updated At : 2023-12-13 09:39:52
Open source software (OSS) is adopted in embedded systems, server applications, and so on because of quick delivery, cost reduction, and standardization of systems. Therefore, OSS is often used not only for personal use but also for commercial use. Much OSS has been developed under the distinctive development style known as the bazaar method. Under this method, many faults are detected and fixed by developers around the world, and the fixed results are reflected in the next release. Also, many OSS projects are developed and managed by using the fault big data recorded in bug tracking systems, and are maintained by several developers together with many OSS users. According to the results of the 2022 Open Source Security and Risk Analysis (OSSRA), OSS is an essential part of proprietary software: for example, 97% of the audited codebases contained OSS, and OSS accounted for 78% of all source code. On the other hand, OSS has issues from various perspectives, so OSS users need to decide whether they should use OSS in consideration of each issue. In addition, the managers of open source projects need to manage their projects appropriately because OSS has a large impact on software around the world. This thesis examines methods for OSS users and open source project managers to evaluate the stability of open source projects, focusing on the following three issues among many: 1. Selection evaluation and licensing: methods for OSS users to make selections when many OSS options are available; 2. Vulnerability support: predicting the fault fix priority for faults reported in OSS; 3. Maintenance and quality assurance: predicting the appropriate timing of OSS version upgrades, considering the development effort required of OSS users after upgrading. In “1. Selection evaluation and licensing,” we attempt to derive an OSS-oriented EVM by applying earned value management (EVM) to several open source projects. EVM is a project management methodology for measuring project performance and progress. To derive the OSS-oriented EVM, we apply stochastic models based on software reliability growth models (SRGM) that consider the uncertainty of the development environment in open source projects. We also improve the method of deriving effort in open source projects; when the existing method is applied, some indices of the OSS-oriented EVM cannot be derived, and we resolve this issue. The derived OSS-oriented EVM helps OSS users and open source project managers to evaluate the stability of their current projects and serves as an important decision-making tool for their decisions regarding OSS and their projects. From a different perspective, we also evaluate the stability of a project in terms of the speed of fault fixing by predicting the time transition of fixing the OSS faults reported in the future. In “2. Vulnerability support,” from the standpoint of open source project managers, we create metrics to detect faults that have a high fix priority and are predicted to take a long time to fix. In addition, we improve the detection accuracy of the proposed metrics by learning not only from the bug report data of the specific version but also from that of past versions, using a random forest that considers the similarities of bug-fix characteristics among different versions. This allows project managers to identify the faults that should be prioritized for fixing when a large number of faults are reported, and facilitates project operations.
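The vulnerability-support study trains a random forest on bug report data pooled across versions to flag faults with a high fix priority. The snippet below is a hedged sketch of that idea with scikit-learn; the feature names, counts, and labels are invented for illustration and are not the metrics defined in the thesis.

```python
# Hedged sketch (feature names are hypothetical, not the thesis metrics):
# a random forest that learns from past-version bug reports and flags
# newly reported faults likely to need priority fixing.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Illustrative bug-report features pooled from past versions
past_reports = pd.DataFrame({
    "severity":        [3, 1, 2, 3, 2, 1, 3, 2],
    "num_comments":    [12, 2, 5, 20, 7, 1, 15, 4],
    "component_churn": [0.8, 0.1, 0.4, 0.9, 0.5, 0.2, 0.7, 0.3],
    "days_open":       [40, 3, 10, 90, 25, 2, 60, 8],
})
high_priority = np.array([1, 0, 0, 1, 0, 0, 1, 0])   # label: needed priority fixing

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(past_reports, high_priority)

# Score faults reported against the current version
new_reports = pd.DataFrame({
    "severity": [3, 1], "num_comments": [9, 2],
    "component_churn": [0.6, 0.2], "days_open": [5, 1],
})
print(model.predict_proba(new_reports)[:, 1])   # probability of high fix priority
```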
In “3. Maintenance and quality assurance,” as an optimal maintenance problem, we predict the appropriate OSS version upgrade timing considering the maintenance effort required of OSS users after upgrading the OSS. Continuing to use a specific OSS version while ignoring its end of life is dangerous in terms of vulnerability, so the version should be upgraded periodically. However, maintenance costs increase when the version is upgraded frequently. We therefore find the optimal maintenance time by minimizing the total expected software maintenance effort from the OSS users' standpoint. In particular, we attempt to reflect the progress of open source projects by using the OSS-oriented EVM in deriving the optimal maintenance time. In conclusion, we found that the proposed methods are applicable to the stability evaluation of open source projects from the three perspectives. In particular, the OSS-oriented EVM discussed in “1. Selection evaluation and licensing” can contribute to the visualization of maintenance effort in open source projects. The proposed methods will potentially contribute to the development of OSS in the future.
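The idea of balancing upgrade effort against the effort that accumulates while an old version stays in service can be illustrated with a toy calculation. The sketch below is only a schematic stand-in for the thesis's effort model, with invented effort rates; it picks the upgrade interval that minimizes the average effort per day.

```python
# Hedged toy sketch, not the thesis model: pick the upgrade interval T that
# minimizes total expected maintenance effort per unit time, where one term is
# the fixed effort of an upgrade and the other grows with exposure time.
import numpy as np

upgrade_effort = 30.0        # person-hours per version upgrade (illustrative)
exposure_rate  = 0.05        # person-hours/day^2 of accumulating risk/backlog (illustrative)

def effort_per_day(T):
    """Average daily effort if the OSS is upgraded every T days."""
    exposure = exposure_rate * T**2 / 2.0      # integral of exposure_rate * t over [0, T]
    return (upgrade_effort + exposure) / T

candidates = np.arange(10, 365)
best_T = candidates[np.argmin([effort_per_day(T) for T in candidates])]
print(f"optimal upgrade interval ~ {best_T} days, "
      f"effort ~ {effort_per_day(best_T):.2f} person-hours/day")
```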
Creators : Sone Hironobu Updated At : 2023-12-12 17:20:31
Hyperspectral (HS) imaging can capture the detailed spectral signature of each spatial location of a scene and leads to a better understanding of material characteristics than traditional imaging systems. However, existing HS sensors can only provide low spatial resolution images at a video rate in practice. Thus, reconstructing a high-resolution HS (HR-HS) image by fusing a low-resolution HS (LR-HS) image and a high-resolution RGB (HR-RGB) image with image processing and machine learning techniques, called hyperspectral image super-resolution (HSI SR), has attracted a lot of attention. Existing methods for HSI SR fall mainly into two research directions: mathematical model based methods and deep learning based methods. Mathematical model based methods generally formulate the degradation procedure of the observed LR-HS and HR-RGB images with a mathematical model and employ an optimization strategy to solve it. Because of the ill-posed nature of the fusion problem, most works leverage hand-crafted priors to model the underlying structure of the latent HR-HS image and pursue a more robust solution. Recently, deep learning based approaches have evolved for HS image reconstruction, and current efforts mainly concentrate on designing more complicated and deeper network architectures to pursue better performance. Although impressive reconstruction results can be achieved compared with the mathematical model based methods, the existing deep learning methods have the following three limitations. 1) They are usually implemented in a fully supervised manner and require a large-scale external dataset including the degraded observations (the LR-HS/HR-RGB images) and their corresponding HR-HS ground-truth images, which are difficult to collect, especially in the HSI SR task. 2) They aim to learn a common model from training triplets and are therefore insufficient to model abundant image priors for various HR-HS images with rich content, where the spatial structures and spectral characteristics differ considerably. 3) They generally assume that the spatial and spectral degradation procedures for capturing the LR-HS and HR-RGB images are fixed and known, and then synthesize the training triplets to learn the reconstruction model, which produces very poor recovery performance for observations with different degradation procedures. To overcome the above limitations, our research focuses on an unsupervised learning based framework for HSI SR that learns the specific prior of the scene under study without any external dataset. To deal with observed images captured under different degradation procedures, we further automatically learn the spatial blurring kernel and the camera spectral response function (CSF) related to the specific observations, and incorporate them into the above unsupervised framework to build a highly generalized blind unsupervised HSI SR paradigm. Moreover, motivated by the fact that cross-scale pattern recurrence frequently exists in natural images, we synthesize pseudo training triplets from the degraded versions of the LR-HS and HR-RGB observations and the observations themselves, and conduct supervised and unsupervised internal learning to obtain a scene-specific model for HSI SR, dubbed generalized internal learning. Overall, the main contributions of this dissertation are three-fold and summarized as follows: 1. A deep unsupervised fusion-learning framework for HSI SR is proposed.
Inspired by the insight that convolutional neural networks themselves possess large amounts of low-level image statistics (priors) and can generate images with regular spatial structure and spectral patterns more easily than noisy data, this study proposes an unsupervised framework for automatically generating the target HS image from the LR-HS and HR-RGB observations only, without any external training database. Specifically, we explore two paradigms for HS image generation: 1) learning the HR-HS target using randomly sampled noise as the input of the generative network, from a data generation view; and 2) reconstructing the target using the fused context of the LR-HS and HR-RGB observations as the input of the generative network, from a self-supervised learning view. Both paradigms automatically model the specific priors of the scene under study by optimizing the parameters of the generative network instead of the raw HR-HS target. Concretely, we employ an encoder-decoder architecture for our generative network and generate the target HR-HS image from the noise or the fused context input. We assume that the spatial and spectral degradation procedures for the LR-HS and HR-RGB observations under study are known, so approximated versions of the observations can be produced by degrading the generated HR-HS image, and the reconstruction errors of the observations can intuitively be used as the loss function for network training. Our unsupervised learning framework can not only model the specific prior of the scene under study to reconstruct a plausible HR-HS estimate without any external dataset, but can also easily be adapted to observations captured under various imaging conditions, which is naively realized by changing the degradation operations in our framework. 2. A novel blind learning method for unsupervised HSI SR is proposed. As described above, the deep unsupervised framework for HSI SR requires the spatial and spectral degradation procedures to be known. However, different optical designs of HS imaging devices and RGB cameras cause various degradation processes, such as the spatial blurring kernels for capturing LR-HS images and the camera spectral response functions (CSF) of the RGB sensors, and it is difficult for general users to obtain this detailed knowledge. Moreover, the concrete computation in the degradation procedures can be further distorted under various imaging conditions. In real applications, it is therefore hard to know the degradation for each scene under study. To handle this issue, this study develops a novel parallel blind unsupervised approach that automatically and jointly learns the degradation parameters and the generative network. Specifically, according to the unknown components, we propose three variants for different problems: 1) a spatial-blind method that automatically learns the spatial blurring kernel used in capturing the LR-HS observation, with the CSF of the RGB sensor known; 2) a spectral-blind method that automatically learns the CSF transformation matrix used in capturing the HR-RGB observation, with the blurring kernel of the HS imaging device known; and 3) a complete-blind method that simultaneously learns both the spatial blurring kernel and the CSF matrix.
Based on our previously proposed unsupervised framework, we design special convolution layers that realize the spatial and spectral degradation procedures in parallel, where the layer parameters are treated as the weights of the blurring kernel and the CSF matrix and are learned automatically. The spatial degradation procedure is implemented by a depthwise convolution layer, where the kernels for the different spectral channels are constrained to be the same and the stride parameter is set to the expansion scale factor, while the spectral degradation procedure is achieved with a pointwise convolution layer with three output channels to produce the approximated HR-RGB image. With this learnable implementation of the degradation procedures, we construct an end-to-end framework that jointly learns the specific prior of the target HR-HS image and the degradation knowledge, and build a highly generalized HSI SR system. Moreover, the proposed framework can be unified to realize different versions of blind HSI SR by fixing the parameters of the implemented convolutions to the known blurring kernel or CSF, and it is highly adaptable to arbitrary observations for HSI SR. 3. A generalized internal learning method for HSI SR is proposed. Motivated by the fact that natural images have strong internal data repetition and cross-scale internal recurrence, we further synthesize labeled training triplets using only the LR-HS and HR-RGB observations, and combine them with the unlabeled observations as training data to conduct both supervised and unsupervised learning, constructing a more robust image-specific CNN model of the HR-HS data under study. Specifically, we downsample the observed LR-HS and HR-RGB images to their "son" versions and produce training triplets from the LR-HS/HR-RGB sons and the LR-HS observation, where the relation among them is the same as that among the LR-HS/HR-RGB observations and the HR-HS target despite the difference in resolutions. With the synthesized training samples, it is possible to train an image-specific CNN model that yields the HR-HS target with the observations as input, dubbed internal learning. However, the synthesized labeled training samples are usually few in number, especially for a large spatial expansion factor, and further downsampling of the LR-HS observation brings severe spectral mixing of surrounding pixels, causing a deviation between the spectral mixing levels at the training and test phases. These limitations can degrade the super-resolved performance of naive internal learning. To mitigate them, we incorporate naive internal learning into our self-supervised learning method for unsupervised HSI SR and present a generalized internal learning method that achieves more robust HR-HS image reconstruction.
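The learnable degradation described in contribution 2 (a depthwise convolution whose stride equals the scale factor for the spatial blur, and a pointwise convolution with three output channels for the CSF) can be sketched roughly as follows in PyTorch. This is a hedged illustration rather than the author's implementation: the band count, scale factor, kernel size, and the stand-in generator are assumptions made for the example.

```python
# Hedged PyTorch sketch of the idea described above (not the author's code):
# spatial degradation = one learnable blurring kernel shared across bands,
# applied with stride equal to the scale factor; spectral degradation (CSF) =
# a learnable 1x1 convolution with 3 output channels. Shapes are assumed.
import torch
import torch.nn as nn

class LearnableDegradation(nn.Module):
    def __init__(self, bands=31, scale=8, kernel_size=13):
        super().__init__()
        self.bands, self.scale, self.k = bands, scale, kernel_size
        # Single blurring kernel shared by every spectral band
        self.blur = nn.Parameter(torch.full((1, 1, kernel_size, kernel_size),
                                            1.0 / kernel_size**2))
        # Camera spectral response: pointwise conv mapping `bands` channels to RGB
        self.spectral = nn.Conv2d(bands, 3, kernel_size=1, bias=False)

    def forward(self, hrhs):
        kernel = self.blur.expand(self.bands, 1, self.k, self.k).contiguous()  # same kernel per band
        lrhs_hat = nn.functional.conv2d(hrhs, kernel, stride=self.scale,
                                        padding=self.k // 2, groups=self.bands)
        hrrgb_hat = self.spectral(hrhs)          # approximated HR-RGB observation
        return lrhs_hat, hrrgb_hat

# Toy training step: the generator output is degraded and compared with the two
# observations; generator and degradation parameters are learned jointly.
generator = nn.Sequential(nn.Conv2d(3 + 31, 64, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(64, 31, 3, padding=1))    # stand-in encoder-decoder
degrade = LearnableDegradation()
lrhs = torch.rand(1, 31, 16, 16)                 # observed LR-HS (placeholder)
hrrgb = torch.rand(1, 3, 128, 128)               # observed HR-RGB (placeholder)
fused = torch.cat([hrrgb, nn.functional.interpolate(lrhs, size=(128, 128))], dim=1)

hrhs_hat = generator(fused)                      # generated HR-HS estimate
lrhs_hat, hrrgb_hat = degrade(hrhs_hat)
loss = nn.functional.l1_loss(lrhs_hat, lrhs) + nn.functional.l1_loss(hrrgb_hat, hrrgb)
loss.backward()
```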
Creators : LIU ZHE Updated At : 2023-12-12 17:03:54
To investigate whether dantrolene (DAN), a cardiac ryanodine receptor (RyR2) stabilizer, improves impaired diastolic function in an early pressure-overloaded hypertrophied heart, pressure-overload hypertrophy was induced by transverse aortic constriction (TAC) in mice. Wild-type (WT) mice were divided into four groups: sham-operated mice (Sham), sham-operated mice treated with DAN (DAN+Sham), TAC mice (TAC), and TAC mice treated with DAN (DAN+TAC). The mice were then followed up for 2 weeks. Left ventricular (LV) hypertrophy was induced in TAC, but not DAN+TAC, mice 2 weeks after TAC. There were no differences in LV fractional shortening among the four groups. A catheter-tip micromanometer showed that the time constant of LV pressure decay, an index of diastolic function, was significantly prolonged in TAC but not in DAN+TAC mice. Diastolic function was significantly impaired in TAC, but not in DAN+TAC, mice as determined by cell shortening and Ca2+ transients. An increase in diastolic Ca2+ leakage and a decrease in calmodulin (CaM) binding affinity to RyR2 were observed in TAC mice, whereas diastolic Ca2+ leakage improved in DAN+TAC mice. Thus, DAN prevented the progression of hypertrophy and improved the impairment of LV relaxation by inhibiting diastolic Ca2+ leakage through RyR2 and the dissociation of CaM from RyR2.
Creators : CHANG YAOWEI Updated At : 2023-12-12 11:39:35
Dantrolene (DAN) binds directly to the N-terminal domain (Leu601-Cys620) of RyR2 and prevents diastolic Ca2+ leakage from RyR2 by stabilizing the tetrameric structure of RyR2. We previously reported, using knock-in mice with high CaM-binding affinity to RyR2 (V3599K), that suppressing CaM dissociation from RyR2 prevents Ca2+ leakage and suppresses left ventricular remodeling in a mouse model of pressure-overload-induced cardiac hypertrophy produced by transverse aortic constriction (TAC). In the present study, we therefore investigated whether chronic administration of dantrolene suppresses left ventricular remodeling in the TAC-induced pressure-overload cardiac hypertrophy mouse model through the same mechanism as genetic enhancement of the binding affinity between CaM and RyR2. A pressure-overload-induced cardiac hypertrophy mouse model was created by TAC. Wild-type mice were assigned to three groups: a Sham group, a TAC group, and a TAC-DAN group (intraperitoneal administration of dantrolene at 20 mg/kg/day). Eight weeks after sham or TAC surgery, survival, cardiac function and histology, Ca2+ handling in isolated cardiomyocytes, and RyR2-CaM binding were evaluated. The TAC-DAN group showed better survival 8 weeks after TAC surgery than the TAC group (49% in the TAC group vs. 83% in the TAC-DAN group). In echocardiography and myocardial histology, the left ventricular remodeling observed in the TAC group was suppressed in the TAC-DAN group. In isolated cardiomyocytes 8 weeks after TAC surgery, the increase in diastolic Ca2+ spark frequency and the decrease in RyR2-CaM binding affinity observed in the TAC group were suppressed in the TAC-DAN group. Our study showed that chronic administration of dantrolene stabilizes RyR2 and suppresses CaM dissociation from RyR2, thereby preventing diastolic Ca2+ leakage from RyR2, suppressing left ventricular remodeling, and improving prognosis.
Creators : 矢野 泰健 Updated At : 2023-12-11 17:29:29
Against the background of a declining birthrate, an aging population, and increasingly advanced medical care, Japan currently faces medical challenges such as a growing number of people requiring nursing care, rising medical costs, staff shortages, and disparities in medical care. In recent years, the emergence of data-driven medicine based on artificial intelligence (AI) technology and systems medicine has opened up the possibility of solving these problems. As a physician engaged in respiratory medicine, the author developed the following three medical AI technologies to support respiratory care. The first technology performs Bayesian estimation based on adverse event big data (the Japanese Adverse Drug Event Report database, JADER) and was able to estimate the drug responsible for an adverse event with an accuracy of AUC 0.93. The respiratory field includes serious adverse events such as drug-induced lung injury, and clinical application of this technology is expected to minimize drug-related harm and make adverse event management more efficient. The second technology applies supervised machine learning to clinical data of asthma patients (age, BMI, blood eosinophil count, exhaled NO, and number of exacerbations) and was able to predict rapid progression of airflow obstruction (decline in FEV1) with an accuracy of AUC 0.85. Its practical use would make it possible to identify asthma patients who need early therapeutic intervention, leading to preemptive treatment that prevents severe disease. The third technology applies unsupervised machine learning to data from an asthma questionnaire (Asthma Control Questionnaire-5, ACQ-5) and was able to estimate asthma pathophysiology, namely airflow obstruction, type 2 airway inflammation, and exacerbation risk, from symptoms alone. Conventionally, specialized tests are required to assess pathophysiology for treatment selection, but this technology can lead to treatment selection tailored to each patient's asthma pathophysiology (personalized treatment) using only the assessment of asthma symptoms included in the ACQ-5. It can support the selection of appropriate asthma medications in settings with limited medical resources, such as developing countries, depopulated areas, and primary care, and may ultimately help correct disparities in medical care. Practical application of the AI technologies developed in this study will support adverse event management, preemptive medicine, and personalized treatment in clinical practice, with the aim of extending healthy life expectancy and curbing medical costs. At the same time, by complementing part of specialized medical care, the developed AI technologies are expected to reduce the workload of healthcare professionals and to correct and equalize disparities in medical care.
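The first technology's core idea, estimating the causative drug of an adverse event by Bayesian inference over a spontaneous-report database such as JADER, can be illustrated with a very small sketch. The example below is hedged: the drug names, event names, counts, and the naive posterior computation are invented for illustration and do not reproduce the author's model or the JADER schema.

```python
# Hedged illustration, not the author's model: a naive Bayesian posterior
# P(drug | adverse event) from co-occurrence counts in a spontaneous-report
# table. Drug names and counts are invented for the example.
from collections import Counter

# (drug, adverse_event) pairs extracted from hypothetical reports
reports = [
    ("drug_A", "interstitial_pneumonia"), ("drug_A", "rash"),
    ("drug_B", "interstitial_pneumonia"), ("drug_B", "interstitial_pneumonia"),
    ("drug_C", "nausea"), ("drug_A", "interstitial_pneumonia"),
]

pair_counts = Counter(reports)
drug_counts = Counter(drug for drug, _ in reports)
n_total = len(reports)

def posterior(event):
    """P(drug | event) proportional to P(event | drug) * P(drug), normalized over drugs."""
    scores = {}
    for drug, n_drug in drug_counts.items():
        likelihood = pair_counts[(drug, event)] / n_drug   # P(event | drug)
        prior = n_drug / n_total                           # P(drug)
        scores[drug] = likelihood * prior
    z = sum(scores.values()) or 1.0
    return {d: s / z for d, s in scores.items()}

print(posterior("interstitial_pneumonia"))   # ranks candidate causative drugs
```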
Creators : 濱田 和希 Updated At : 2023-12-11 17:08:12
When cells are exposed to proteotoxic stress such as heat stress, they adapt by inducing heat shock proteins (HSPs). This adaptive mechanism is called the heat shock response and is regulated mainly at the transcriptional level by the heat shock transcription factor HSF1. Activated HSF1 binds to heat shock response elements (HSEs) in HSP gene promoters and promotes transcription by recruiting the pre-initiation complex, including Mediator. In general, transcription factors and their regulators are thought to form condensates on promoters through liquid-liquid phase separation. However, whether the same occurs on HSP gene promoters has not been analyzed sufficiently because the condensates are very small. In this study, a reporter gene containing many tandem copies of the HSE derived from the human HSP72 promoter was introduced into mouse cells. By expressing HSF1 fused to the fluorescent protein mEGFP in cells carrying this HSE reporter gene, we succeeded in visualizing HSF1 condensates under heat stress conditions. These artificial HSF1 condensates had partially liquid-like properties, that is, they were formed by liquid-liquid phase separation. Experiments using proteins purified from E. coli also showed that the intrinsically disordered region (IDR) of HSF1 contributes to phase separation. Furthermore, using this experimental system, we examined whether the formation of HSF1 condensates is regulated by transcriptional regulators. In particular, focusing on MED12, one of the Mediator subunits that promote the heat shock response, we found that the IDR of MED12 accumulates in HSF1 condensates and that knockdown of MED12 markedly suppresses HSF1 condensate formation. This study provides an experimental system for analyzing HSF1 condensates on the HSP72 promoter and suggests that their formation is regulated by transcriptional regulators.
Creators : 岡田 真理子 Updated At : 2023-12-11 16:49:58
Creators : Han Jihae Updated At : 2021-06-11 20:38:01
Chromatography is considered a key operation in the downstream process (DSP) of biopharmaceuticals, including proteins. Therapeutic proteins such as monoclonal antibodies (mAbs), with high economic value in the global market, require immediate innovation in the purification step to adapt to the increased throughput from upstream. Authorities have also initiated changes toward a more modernized pharmaceutical manufacturing platform that is agile and flexible without extensive oversight. Instead of the conventional batch operation and empirical models, this dissertation explores the design and application of in silico modeling and simulation for integrated multi-column processes to improve the performance of capture chromatography steps. Because mechanistic models reveal adsorption and mass transfer behaviors in chromatography better than statistical models, mechanistic frameworks were applied in this study. Ion-exchange and protein A chromatography, the main categories of therapeutic protein chromatography, were examined. Using oligonucleotides as an example, the mass transfer behavior of biomolecules in different types of ion-exchange resins was explored with mechanistic models. The results demonstrate the effectiveness of modeling approaches for understanding the chromatography of biopharmaceuticals. Focusing on the DSP of mAbs, multi-column continuous chromatography was examined with IgG samples. The study covered settings from repeated batch to four-column continuous periodic counter-current (PCC) chromatography, with the development of modeling and simulation tools for process quantification and evaluation. Process performance, including productivity, capacity utilization, and buffer consumption, was investigated by simulation with the aim of increasing productivity and lowering buffer consumption, which are the main bottlenecks in the current DSP. The critical operating parameter, the breakthrough percentage (BT%) for column switching in PCC processes, requires information on binding capacity, mass transfer, and the non-loading operations. To obtain the optimal BT% under synchronized conditions, numerical solvers developed from the mechanistic models were employed. It was found that over 20% improvement in buffer consumption and resin utilization can be achieved in PCC processes while maintaining the same productivity as batch operation. Furthermore, regression relations were developed to predict process performance and BT% based on the findings from the PCC simulations. With a coefficient of determination (R²) over 0.95, the linear regression functions can act as an accelerated method for PCC process design. Finally, a new strategy of a linear flow-velocity gradient (LFG) in the loading step was explored as a supplement to increase process efficiency. The method controls the total column capacity and the loaded amount as functions of time. Based on the relationship between the dynamic binding capacity and residence time, the gradient time of the LFG was obtained, and the optimal flow velocities and time gradients were examined by scanning the range of applicable residence times. A case study of the four-column PCC process is presented. By integrating a linearly decreasing flow gradient into the PCC loading operation, productivity was enhanced 1.4-fold, along with a 13% reduction in resin cost per amount of processed mAbs compared with constant flow.
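The regression relations mentioned above, which predict process performance and the switching BT% from the PCC simulations, can be sketched in a few lines. The example below is hedged: it fits an ordinary linear regression with scikit-learn on invented residence-time/BT% pairs and is not the dissertation's actual correlation or data.

```python
# Hedged illustration, not the dissertation's actual correlations: fitting a
# linear regression that maps an operating variable (here, residence time)
# from simulated PCC runs to the optimal switching breakthrough percentage.
# The numbers are invented placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression

residence_time_min = np.array([[2.0], [3.0], [4.0], [5.0], [6.0]])   # loading residence time
optimal_bt_percent = np.array([18.0, 24.0, 29.0, 33.0, 36.0])        # from hypothetical simulations

model = LinearRegression().fit(residence_time_min, optimal_bt_percent)
r2 = model.score(residence_time_min, optimal_bt_percent)

print(f"BT% ~ {model.coef_[0]:.2f} * t_res + {model.intercept_:.2f} (R2 = {r2:.3f})")
print("predicted BT% at 4.5 min:", model.predict([[4.5]])[0])
```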
Undoubtedly, the next generation of DSP platform technology is directed toward continuous and integrated systems. Given its advantages in process performance and from a regulatory perspective, continuous manufacturing can advance development and manufacturing while assuring product quality. The evolution of modeling and simulation enables faster development of in silico process prediction and evaluation. With the support of such models, process design and optimization in chromatography can rise to the challenge.
Creators : Chen Chyi Shin Updated At : 2021-06-11 20:38:01
Phosphorus is an indispensable nutrient for sustaining the daily life of all living things on Earth. However, over-enrichment of aquatic ecosystems with phosphorus leads to eutrophication, which is still a global environmental problem. More stringent regulations have been put in place to limit phosphorus discharge and address this problem; as a result, phosphorus removal has become exceptionally crucial. Furthermore, phosphorus deposits are a non-renewable resource and are forecast to be depleted by around 2170, given current usage and global population growth. Thus, the removal of phosphorus coupled with its recovery and reuse offers the best strategy to meet future phosphorus demand. Accordingly, adsorption represents an attractive technique for separating phosphate from water because of the possibility of phosphorus recovery. Moreover, this approach has many advantages, such as efficiency, easy operating conditions, low sludge production, and the possibility of regenerating the adsorbent. Numerous attractive low-cost adsorbents have been studied for phosphate removal, one of which is layered double hydroxides (LDH). Unfortunately, a high phosphate adsorption capacity of LDH can generally be achieved only by calcination, which increases the preparation cost of LDH. In this study, LDH is functionalized with amorphous zirconium (hydr)oxide to obtain enhanced adsorption capacity and eliminate the high-temperature requirement during the synthesis process. Although different treatment techniques have been developed to eliminate phosphorus contamination, including in wastewater treatment, treated water often fails to meet quality regulations. Amorphous zirconium (hydr)oxide/MgFe layered double hydroxide composites (am-Zr/MgFe-LDH) with different molar ratios (Zr/Fe = 1.5 and 2) were prepared by a two-stage synthesis combining coprecipitation and hydrothermal methods. The synthesis of the composite eliminates the requirement for high-temperature calcination of the LDH for phosphate adsorption. Moreover, the phosphate adsorption ability of the composite was higher than that of the individual LDH and amorphous zirconium (hydr)oxide. The presence of amorphous zirconium (hydr)oxide increased the phosphate adsorption ability of the composite at low pH. The adsorption capacity increased with decreasing pH and increasing temperature (from 290 to 324 K). Bicarbonate (HCO3−) was the most competitive anion for phosphate adsorption. The pseudo-second-order model provided the best description of the kinetic adsorption data. Furthermore, the adsorbed phosphate was easily desorbed by 1 N and 2 N NaOH solutions, and the adsorbent could be reused. The results suggest that the am-Zr/MgFe-LDH composite is a promising material for phosphate removal and recovery from wastewater. A fixed-bed column has been considered an industrially feasible technique for phosphate removal from water. Besides the adsorption capacity, the effectiveness of an adsorbent is also determined by its reusability. In this study, phosphate removal by the synthesized am-Zr/MgFe-LDH in a fixed-bed column system was examined. The results showed that increasing the bed height and phosphate concentration and reducing the flow rate, pH, and adsorbent particle size increased the column adsorption capacity. The optimum adsorption capacity of 25.15 mg-P g^{-1} was obtained at pH 4. The coexistence of seawater ions had a positive effect on the phosphate adsorption capacity of the composite.
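The pseudo-second-order kinetic model cited above has the closed form q_t = k q_e^2 t / (1 + k q_e t). The snippet below is a hedged sketch of fitting it with scipy; the contact times and uptake values are invented placeholders, not the study's measurements.

```python
# Hedged sketch with invented data points (not the study's measurements):
# fitting the pseudo-second-order kinetic model q_t = k*qe^2*t / (1 + k*qe*t)
# to batch phosphate-uptake data with scipy.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([5, 10, 20, 40, 60, 120, 240], dtype=float)       # contact time (min)
q_t = np.array([8.1, 12.5, 16.8, 20.2, 21.9, 23.8, 24.6])       # uptake (mg-P/g), placeholder

def pseudo_second_order(t, qe, k):
    """q_t as a function of time for the pseudo-second-order model."""
    return k * qe**2 * t / (1.0 + k * qe * t)

(qe_fit, k_fit), _ = curve_fit(pseudo_second_order, t, q_t, p0=[25.0, 0.01])
residuals = q_t - pseudo_second_order(t, qe_fit, k_fit)
r2 = 1 - np.sum(residuals**2) / np.sum((q_t - q_t.mean())**2)
print(f"qe = {qe_fit:.2f} mg-P/g, k = {k_fit:.4f} g/(mg*min), R2 = {r2:.3f}")
```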
Nearly complete phosphate desorption, with a desorption efficiency of 91.7%, was effectively achieved by 0.1 N NaOH in one hour. Moreover, the initial adsorption capacity was maintained at approximately 83% even after eight adsorption-desorption cycles, indicating that the composite is economically feasible. The am-Zr/MgFe-LDH, with its high adsorption capacity and superior reusability, has the potential to be utilized as an adsorbent for phosphorus removal in practical wastewater treatment. The possible mechanisms of phosphate adsorption by am-Zr/MgFe-LDH were investigated via X-ray diffraction (XRD), Fourier transform infrared (FTIR) spectroscopy, X-ray photoelectron spectroscopy (XPS), and pH at the point of zero charge (pHPZC) analyses. It is suggested that the high phosphate adsorption capacity of the composite involves three main adsorption mechanisms, namely electrostatic attraction, inner-sphere complexation, and anion exchange, where the amorphous zirconium (hydr)oxide on the surface of the layered double hydroxides likely increased the number of active binding sites and the surface area for adsorption. This study provides insights into the design of am-Zr/MgFe-LDH for phosphorus removal and recovery in practical systems.
Creators : ATIN NURYADIN Updated At : 2021-12-07 00:34:47