Technische Fakultät
Recent Submissions
- Doctoral thesis, Open Access: Kohärente optische Frequenzbereichsreflektometrie zur Lokalisierung und Geschwindigkeitsmessung optisch gefangener Mikropartikel in Hohlkernfasern für neuartige Sensorsysteme (2023) Köppel, Max; Schmauß, Bernhard. In the past five decades, fiber optic sensor systems have become increasingly important for many applications in industry and for the monitoring of civil infrastructure. Improvements and new developments of laser sources, detectors, and signal processing have repeatedly increased the performance and opened up new application scenarios. However, the established concepts for distributed optical fiber sensing are approaching fundamental physical limits or have already reached them. In particular, no significant improvements of the spatial resolution can be expected anymore. Such improvements, however, are required for monitoring emerging technologies such as battery storage systems or power electronics in a decentralized electricity grid. A new generation of optical fibers has the potential to significantly push the existing boundaries and enable new applications and concepts – in optical communication networks as well as in sensor systems. The latest generation of so-called anti-resonant or revolver hollow-core fibers differs fundamentally from conventional fused-silica step-index fibers – it features a hollow (usually air-filled) core – leading to entirely different properties and characteristics (e.g., regarding dispersion, non-linearities, or losses). On the one hand, these fibers raise expectations of even more powerful optical communication networks. On the other hand, they also enable entirely new sensor concepts. One such novel sensor concept based on hollow-core fibers is the so-called flying particle sensor. Such a system relies on an optically levitated microparticle which is trapped in front of the fiber and subsequently propelled along the hollow fiber core. As it moves along the fiber, the particle is affected by environmental influences such as temperature, electric fields, or others. Depending on the particle properties and the measurement configuration, it is possible to reconstruct the measurands at the particle position by monitoring the particle speed, reflectivity, emission at a specific wavelength, or the transmission through the fiber. Since such particles usually have a size of only a few microns, a very high spatial resolution can be achieved in principle. At the same time, the low fiber losses potentially enable measurement distances up to the kilometer range. For the implementation of such a sensor system, particle localization and tracking play a key role. In order to take advantage of the small particle size, a very accurate and precise localization is required. The low reflectivity of a microparticle makes a reliable detection challenging, as it requires a very high sensitivity. At the same time, a sufficiently high repetition rate (about 10 Hz) is necessary in order to track dynamic particle movements. For a practical implementation, the maximum measurement range must cover at least several meters. Also, the localization system must be compatible with the optical trapping setup in order to allow for an easy integration without interference. As a first step, existing reflectometry concepts were examined and their applicability for particle localization was assessed.
Pulsed time domain systems can easily achieve very long measurement ranges but offer a spatial resolution too coarse for the intended application. In contrast, interferometric concepts such as low coherence reflectometry or optical coherence domain reflectometry achieve a very high spatial resolution and sensitivity simultaneously. However, the maximum measurement range of these concepts is usually limited to several centimeters. Based on coherent detection, coherent optical frequency domain reflectometry (COFDR) achieves a high sensitivity as well. Using a wide frequency tuning range, it is possible to achieve a very fine spatial resolution. At the same time, the development of highly coherent tunable laser sources has enabled measurement distances up to several kilometers. Because of these unique features and the high flexibility of such a system, COFDR appeared to be the most promising approach for realizing a particle localization system. A particular focus of this work was the optimization with regard to sensitivity and spatial resolution in order to meet the challenging requirements. Another important part of this work is the analysis and compensation of effects caused by particle movements. Eventually, such a system was implemented and characterized. The experimentally determined standard deviation for the localization of a static Fresnel reflection at an air-glass surface (reflectivity ≈3.3 %) was 70 nm. The two-point resolution of ≈40 µm in air allows two targets to be resolved even if they are very close to each other. This is important for reliably tracking several trapped particles spaced by only a few millimeters. This performance could only be achieved by developing and implementing the corresponding signal processing steps to correct for several disturbances and non-idealities. In order to track frequency deviations during the laser sweep, an auxiliary interferometer was set up, and the measurement data was corrected based on the auxiliary data. Moreover, algorithms for dispersion correction were implemented and tested. Eventually, the localization of optically trapped microparticles using COFDR was demonstrated for the first time. In order to enable the tracking of propelled particles, the developed COFDR system was extended. First, analytic expressions for the calculation of position and speed from two measurements were derived. Furthermore, a refocusing algorithm was implemented in order to mitigate motion artifacts of moving targets. The resulting system – so-called Doppler-COFDR – allows the position and speed of uniformly moving particles to be determined independently. Adjusting the measurement bandwidth allows balancing the localization precision against the measurement rate. In order to improve the localization accuracy and robustness for accelerated targets, a Kalman filter with an adapted kinematic model was implemented and tested. It was experimentally demonstrated that such a filter yields an effective suppression of the systematic errors which would otherwise occur for accelerated targets, thus enabling the analysis of dynamic particle movements as well. Eventually, the applicability of the developed COFDR system for a flying particle sensor was demonstrated with a temperature sensing experiment. For the speed-based temperature measurements, an optically trapped microparticle was propelled through a heated section of the hollow-core fiber while tracking its position and speed.
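As an editorial aside, the kind of Kalman filter with an adapted kinematic model mentioned above can be sketched as follows; this is a generic constant-acceleration filter fed with position/speed measurements, with all matrices and noise levels chosen hypothetically for illustration, not the thesis's implementation.

```python
import numpy as np

# Illustrative constant-acceleration Kalman filter for particle tracking.
# State x = [position, speed, acceleration]; measurements are the
# position/speed pairs a Doppler-COFDR system would deliver. All
# numerical values (dt, noise covariances) are hypothetical.
dt = 1.0 / 62.4  # measurement interval for a 62.4 Hz rate

F = np.array([[1, dt, 0.5 * dt**2],    # kinematic state transition
              [0,  1, dt],
              [0,  0, 1]])
H = np.array([[1.0, 0.0, 0.0],         # we observe position ...
              [0.0, 1.0, 0.0]])        # ... and speed
Q = 1e-6 * np.eye(3)                   # process noise (tuning parameter)
R = np.diag([(70e-9)**2, 1e-8])        # 70 nm position std; speed variance guessed

def kalman_step(x, P, z):
    """One predict/update cycle for measurement z = [position, speed]."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(3) - K @ H) @ P
    return x, P

# Usage: start at rest and feed in successive (position, speed) measurements.
x, P = np.zeros(3), np.eye(3)
for z in [np.array([0.001, 0.03]), np.array([0.0015, 0.031])]:
    x, P = kalman_step(x, P, z)
print(x)  # filtered position, speed, and acceleration estimate
```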
The fiber temperature was then reconstructed from the recorded speed profiles by exploiting the temperature dependence of the viscosity of the air inside the hollow core. Strong temperature gradients cause a thermal creep flow and consequently air turbulence inside the hollow fiber core. This causes speed over- and undershooting in such regions – an effect which had already been noticed and described in the literature. This behavior was also observed in the conducted measurements with good qualitative agreement, which affirms the capability of the developed system to track dynamic particle motions. For the temperature measurements, a standard deviation of 4.94 °C was achieved for temperatures up to 100 °C. The standard deviation for the entire measurement range up to ca. 200 °C was 10.3 °C. The chosen COFDR parameters yielded a measurement rate of 62.4 Hz, which resulted in one position and speed measurement approximately every 0.5 mm on average for the mean particle speed of ≈30 mm/s. This represents a spatial resolution significantly higher than that of established fiber optic sensor systems for distributed temperature measurements. In conclusion, in this work a reliable system for localizing and tracking optically trapped microparticles inside hollow-core fibers was developed and tested successfully. This system makes it easy to implement control loops for the automated positioning of optically trapped microparticles at arbitrary fiber locations, or for movement along the fiber at constant speed, which requires rebalancing the trapping power depending on the particle position due to fiber losses. Besides the presented speed-based temperature sensing, the developed COFDR system can also be used to realize other measurement principles such as temperature sensing based on the fluorescence lifetime of doped particles or electric field sensing using charged particles. The possibility to trap, move and track a number of particles in a bound array configuration also allows novel sensing principles to be developed which are based on the interaction between several particles. Therefore, the results of this work provide the basis for future development towards the practical use of flying particle sensor systems.
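For readers unfamiliar with the physics behind the speed-based thermometry above: the temperature dependence of the air viscosity is commonly described by Sutherland's law, shown here as a standard textbook relation (the thesis may use a different viscosity correlation):

```latex
% Sutherland's law for the dynamic viscosity of air (illustrative model;
% the thesis may employ a different correlation).
\mu(T) = \mu_{\mathrm{ref}}
         \left(\frac{T}{T_{\mathrm{ref}}}\right)^{3/2}
         \frac{T_{\mathrm{ref}} + S}{T + S},
\qquad
S \approx 110.4\,\mathrm{K},\;
T_{\mathrm{ref}} = 273.15\,\mathrm{K},\;
\mu_{\mathrm{ref}} \approx 1.716\times 10^{-5}\,\mathrm{Pa\,s}
```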
- Doctoral thesis, Open Access: Optimization of dynamic transfer function analysis for impedance spectroscopy and intensity-modulated photo spectroscopy for application to non-stationary and distributed electrochemical systems (2023) Schiller, Karl-Albrecht; Schmuki, Patrik. In the evaluation of electrochemical processes, it is popular to measure the electrical impedance of electrodes and cells. Kinetic knowledge, obtainable from the impedance spectra, can be used to extract application-relevant information. Examples are, for instance, the corrosion rate of materials or process details determining the efficiency of fuel cells or photovoltaic cells. This work aims at improving impedance-related techniques, with emphasis on the analysis procedure (dynamic transfer function analysis, TFA). Moreover, it is proposed to widen the TFA tools for application to photo-sensitive objects (intensity-modulated photo spectroscopy, IMPS). The author performed this work alongside his profession as manager and head of research in his company Zahner-Messtechnik in Kronach, Germany. For many cases, application examples for the developed procedures are presented. Usually, they grew out of cooperation with universities and external research institutes. The leading role was taken by the Department of Materials Science of the Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany, with the Chair for Surface Science and Corrosion (LKO) as a mentor of this work. Reflecting the requests brought to the author by the cooperation partners, the studies are diverse, comprising tasks of verification, recovery and analysis of measured transfer function data, the time dependency of quasi-steady-state systems, as well as the modeling of photo-electrical systems such as solar cells. Procedures for the estimation of the reliability of experimental TFA data and for the determination of the significance and uncertainty of electrochemical parameters evaluated by the TFA (e.g., the electrode polarization resistance or the characteristic values of the diffusion involved) can be considered results of this work. The temporal change of an electrochemical system during the recording time of a spectrum may be a serious experimental obstacle, as drift collides with one of the basic requirements for any TFA, which is stationarity during the measurement time. Theoretically, the well-known Kramers-Kronig transform allows for the detection of parasitic errors caused by the loss of causality due to temporal change, but its numerical evaluation is problematic in practice. The Logarithmic Hilbert Transform for Two-Poles, referred to as ZHIT, was developed by the author and has proven itself in practice thanks to its uncomplicated numerics and its robustness against the limited frequency range of experimental data. Even TFA data affected by time-drift may be analyzed in a comparably accurate manner. For this, several spectra have to be acquired sequentially, and data sets belonging to certain points in time need to be calculated by means of a temporal interpolation. This is demonstrated alongside the ZHIT on non-steady-state systems like corroding iron in a CO2-containing electrolyte, a fuel cell which is gradually poisoned by carbon monoxide present in the fuel gas, another fuel cell whose cathode is flooded by reaction water, and organic protective coatings on metal under water uptake.
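For orientation, the ZHIT relation mentioned above reconstructs the logarithm of the impedance modulus from the phase; its commonly published approximate form is reproduced below (constants quoted from the ZHIT literature and worth double-checking against the original):

```latex
% ZHIT: logarithmic Hilbert transform for two-poles (approximate form).
% gamma = pi/6; c is a constant fixed by anchoring to drift-free
% high-frequency data. Reproduced from the ZHIT literature.
\ln|H(\omega_0)| \;\approx\; c
  \;+\; \frac{2}{\pi}\int_{\omega_s}^{\omega_0}\varphi(\omega)\,\mathrm{d}\ln\omega
  \;+\; \gamma\,\left.\frac{\mathrm{d}\varphi(\omega)}{\mathrm{d}\ln\omega}\right|_{\omega=\omega_0},
\qquad \gamma = \frac{\pi}{6}
```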
Insight into devices for electrochemical energy conversion, like batteries and fuel cells, or into modern solar cells often requires knowledge of the dynamic properties of distributed systems, as represented by porous structures and non-homogeneous dielectrics, and of their characteristic relaxation behavior. It appeared appropriate to address such phenomena in this work. The temperature relaxation behavior of electrochemical objects like fuel cells or batteries under the influence of heating by electrical current is not commonly known. Therefore, exemplary studies on temperature-dependent resistors were performed in this work. As an outcome, the theoretical formulation of the "General Relaxation Impedance" is presented and discussed together with the experimental results. It is a well-established practice to model the impedance of electrodes by means of electrical equivalent circuits. In contrast, accurate modeling of photo-electrically active objects like solar cells is usually done by individual analytical computations. If one wants to transcribe the modular approach of equivalent circuit analysis to dynamic photo-electric transfer functions, one may introduce a localized photocurrent source as a network component. The author describes his technical realization in an analysis program. A special fitting feature of this program is the so-called "TRIFIT" algorithm. It allows the joint best model fit of three types of experimental data, from impedance, dynamic photocurrent efficiency, and photovoltage efficiency spectra, all acquired from the same system state of the object under test. Compared to the analysis of isolated spectra, as is common in EIS, the ratio between the number of known observables and the number of free parameters is increased. This leads to a model closer to definiteness, with less ambiguity. Experiments in this field were performed by the author on dye-sensitized solar cells (DSSC) based on TiO2 nanotubes and on tantalum oxide films created by anodic oxidation of tantalum metal. By means of the TRIFIT algorithm, the "Bay-West" model of porous, distributed photocurrent generation, published by a Danish group, could be verified for the photo-anode of the DSSC. Besides this, in cooperation with colleagues from the LKO institute, several samples of thin film solar cells were examined. Their charge carrier time constants for diffusion and recombination, evaluated as dynamic efficiencies from photocurrent and photovoltage spectra, could be related to the synthesis parameters and the sensitizing-dye content of the samples. The experimental impedance, dynamic photocurrent, and photovoltage efficiency spectra of the tantalum oxide films could be modelled over a wide range of preparation parameters of the anodic oxidation by means of a simple electrical equivalent circuit. It is primarily composed of the photocurrent source embedded in the model of the oxide, which contains three characteristic conduction regions with p-type, intrinsic, and n-type conductivity. This "PIN model" is presented in the chapter about inhomogeneous dielectrics. For the determination of reliable dynamic photovoltage and photocurrent spectra, instrumentation of a high quality standard is necessary. Poor reproducibility or missing calibration of the light intensity used, non-linearity of the light modulation, and frequency-dependent artefacts have a negative effect on the interpretation of the experimental results.
The "Controlled Intensity Modulated Photo Spectroscopy" (CIMPS), a development of the author, is able to enhance the accuracy of photo-electrochemical transfer function measurements significantly, compared to the current-controlled LEDs commonly used as light sources in the literature. The CIMPS principle is outlined here as well.
- Doctoral thesis, Open Access: Molecular Communications System Design based on Magnetic Nanoparticles (2023-12-05) Wicke, Wayan; Schober, Robert. Inspired by the communication mechanisms used in natural systems, molecular communication (MC) is a new mode of communication for novel applications in biomedical or environmental engineering where electromagnetic waves, used in conventional communication systems, would be unsuitable. In contrast to conventional communication systems employing electromagnetic waves to carry information, e.g., by modulating their amplitude, frequency, or phase, MC systems employ signaling molecules where information may be embedded in the properties of the released particle ensemble, e.g., their number, type, and time of release by the transmitter. The main objective of this dissertation is the design and analysis of MC systems employing magnetic nanoparticles (MNPs), a novel degree of freedom, as information carriers in a fluid environment. The main advantage of MNPs composed of biochemical molecules is their magnetic property, which allows them to be actively guided by a magnetic field as well as passively detected from the outside. Moreover, due to thermodynamic effects, MNPs do not agglomerate outside a magnetic field and can be designed for various applications, e.g., by tuning their size. Motivated by these features, in this dissertation, we study how MNPs can be utilized for MC to cope with signal decay, inter-symbol interference (ISI), and potential time synchronization errors, as described in the following. MNP-based MC in Bounded Environments: The disordered motion of signaling particles (diffusion) is one of the main impairments in MC systems. To mitigate the resulting signal attenuation, additional particle transport mechanisms directed towards the receiver are required. One directed transport mechanism, which is known from nanomedicine and which can be controlled externally, is the guiding of MNPs by a magnetic field. In this dissertation, we model the drift velocity of MNPs subject to a magnetic force in a bounded drift-diffusion system and study its beneficial impact on MC performance. MC Driven by Laminar Flow: In a fluid environment, particle transport over larger distances is typically realized by means of fluid flow in vessels. For such scenarios, MC models usually assume a spatially homogeneous drift velocity (uniform drift), whereas most common flow systems exhibit a non-uniform (laminar) flow profile. In this dissertation, we study the relative importance of particle transport by laminar flow and diffusion for MC in a cylindrical environment, particularly for scenarios where the particle transport is dominated by fluid flow. Experimental Validation of MC Driven by Laminar Flow: While theoretical work is insightful, experiments are ultimately needed to validate communication models. In MC, several testbeds have been proposed which have been modeled either heuristically or in a data-driven fashion. For this dissertation, we consider data from an MC testbed using MNPs in a laminar flow system which has been developed at Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU). After extending our theoretical flow modeling framework based on insights gained from experimental data, we find that the experimental results fit favorably with the theoretical flow-driven MC model developed in this dissertation.
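As background for the magnetically induced drift modeled above, a standard (non-thesis-specific) estimate of the terminal drift velocity balances the magnetic force on a spherical particle against Stokes drag:

```latex
% Terminal drift velocity of a spherical MNP of radius a in a fluid of
% dynamic viscosity eta under magnetic force F_mag (Stokes drag balance).
% The magnetophoretic force expression assumes linear magnetization with
% susceptibility chi and particle volume V_p; illustrative notation.
\mathbf{v}_{\mathrm{drift}} = \frac{\mathbf{F}_{\mathrm{mag}}}{6\pi\eta a},
\qquad
\mathbf{F}_{\mathrm{mag}} = \mu_0\, V_p\, \chi\, (\mathbf{H}\cdot\nabla)\mathbf{H}
```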
Pulse Shaping for MC via Particle Size: Next to ISI and signal attenuation, time synchronization errors are a further fundamental impairment for communication. While time synchronization schemes can mitigate timing errors to some degree, communication designs robust to residual timing errors are necessary. One fundamental method for communication design is pulse shaping, i.e., optimization of the end-to-end received signal by the deliberate design of the transmitter. In this dissertation, we explore using the size of the signaling particles as a degree of freedom for pulse shaping and provide a framework to effectively select a particle mixture comprising differently sized particles at the transmitter. It is noted that for the above modeling scenarios, we study the relevant physical phenomena and derive closed-form expressions for the end-to-end channel impulse response (CIR). All novel CIR expressions are validated by particle-based simulation (PBS), and communication performance is evaluated in terms of the symbol error rate (SER) for both simulated and, where applicable, experimental data.
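To make the channel-impulse-response concept concrete, the sketch below evaluates the widely used CIR of an unbounded 3-D diffusion channel with uniform flow and a transparent spherical receiver; this is a generic textbook model under simplifying assumptions, not one of the dissertation's novel bounded or laminar-flow expressions, and all numerical values are hypothetical.

```python
import numpy as np

def cir_uniform_flow(t, d, v, D, r_rx):
    """Expected fraction of released particles inside a transparent
    spherical receiver of radius r_rx at time t (unbounded 3-D diffusion
    with uniform axial flow; receiver assumed small compared to d).
    t : observation times [s] (array); d : transmitter-receiver distance
    along the flow axis [m]; v : flow velocity [m/s]; D : diffusion
    coefficient [m^2/s]."""
    v_rx = 4.0 / 3.0 * np.pi * r_rx**3           # receiver volume
    spread = 4.0 * D * t                         # diffusive variance term
    return v_rx / (np.pi * spread) ** 1.5 * np.exp(-(d - v * t) ** 2 / spread)

# Example (hypothetical values): in a flow-dominated channel the CIR
# peaks near t = d / v, here 10 s.
t = np.linspace(1e-3, 10.0, 1000)
h = cir_uniform_flow(t, d=1e-3, v=1e-4, D=1e-10, r_rx=5e-5)
print(t[np.argmax(h)])  # close to d / v
```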
- Doctoral thesis, Open Access: Estimable and Scalable Coordination of Multithreaded Processes (2023-11-30) Reif, Stefan; Schröder-Preikschat, Wolfgang. Emerging application scenarios like self-driving cars and the Internet of Things need computing systems that provide high computation performance with low power draw under timeliness constraints. Due to their power and performance requirements, these systems have to utilise highly parallel hardware architectures. However, it is difficult to analyse the resource demand (i.e., energy and latency) of individual operations on conventional parallel systems. This resource-unawareness often prohibits the use of highly parallel systems in resource-critical environments, in particular where response-time or energy-related constraints have to be satisfied. This thesis introduces the concept of estimability of a computing system, and of system-level software in particular, which makes resource-awareness in parallel computing systems feasible. In particular, this concept enables a resource-demand analysis with high accuracy despite little effort. The motivation behind estimability is the set of frequently applied design patterns in parallel systems that cause an accumulation of seemingly minor disturbances, such that the aggregation of all interferences becomes performance-critical. In consequence, an accurate resource-demand analysis of such a system is infeasible because all possible combinations of disturbances have to be considered; in practice, resource-demand analyses are therefore tedious and inaccurate. This thesis introduces several approaches to avoid these problematic patterns, in order to prevent the accumulation of disturbances. Systems have to be designed specifically to be estimable, that is, to allow for a resource-demand analysis that is accurate despite requiring little effort. This idea leads to a system architecture based on partitions, where each partition can be individually analysed with relatively little effort. To collectively satisfy performance requirements, however, these partitions have to cooperate by means of communication. Unfortunately, communication is traditionally one of the patterns that cause the accumulation of disturbances. This thesis therefore introduces multiple means for coordination that limit the potential accumulation of interferences. In consequence, patterns of accumulating disturbances can be broken up, and an analysis can be much more accurate (as there is only little variation in resource demand) with significantly reduced effort (as interferences between partitions are avoided, despite communication). This thesis introduces approaches that improve the estimability of computing systems, with a focus on methods for coordination between cooperating partitions. Several approaches operate by construction: the system is specifically designed in a way that improves estimability. In particular, this thesis introduces a variety of novel coordination mechanisms where potential interferences are bounded and disturbances cannot accumulate arbitrarily. Further approaches are based on adaptation. They apply machine-learning techniques to modify low-level components at run-time, in order to provide an estimable behaviour to semantically higher levels. The practicability of adaptation-based approaches is demonstrated on the basis of a real-time communication system for the Internet of Things.
Finally, this thesis discusses the resource demand of suitable machine-learning techniques for system-level adaptation, considering both hardware and software. Since these techniques guide adaptation mechanisms, their resource demand has to be estimable as well, to enable whole-system estimability.
- Doctoral thesis, Open Access: Analytisch-numerische Magnetfeldberechnung im Luftspalt elektrischer Käfigläufer-Asynchronmaschinen (2023) Stiller, Matthias; Hahn, Ingo. Electrical squirrel-cage induction machines are among the most robust machine types and account for a large share of traction drives. Their operating principle is based on dynamically induced rotor currents and is accompanied by the characteristic slip. Induction machines are therefore subject to a multitude of waves of different frequencies, which shape the operating behavior and, in particular, the loss behavior. Modern efficiency requirements demand that these effects already be considered in the design process. The calculation and design of electrical machines today mostly relies on the finite element method. As a numerical calculation method, it is based on subdividing an electromagnetic problem into many discrete, interrelated regions. Accuracy, but also computation time, scales with the number of regions. Although highly accurate solutions can be obtained this way, also with regard to the oscillatory behavior of electrical and magnetic quantities, no conclusions can be drawn about the cause of and cross-coupling between different waves. Moreover, unacceptably long computation times usually result. A promising alternative is the use of field theory (also known as harmonic wave theory or the coil model). As an analytical method, however, it is subject to various simplifications and restrictions with regard to geometric complexity and nonlinear material behavior. The overarching goal of the present work is the accurate and fast calculation of magnetic field distributions within the machine. In a first step, the basic form of the field theory is derived mathematically in detail. This yields a set of formulas for calculating the self- and mutual inductances of stator and rotor. Together with ohmic elements and the parasitic effects of leakage and current displacement, these can be combined into a system of equations. The system of equations contains one equation for each wave present in stator and rotor and enables a holistic calculation of the electrical behavior of the machine. Based on the computed current flows in both machine parts, conclusions can be drawn about the magnetic field in the air gap of the machine. To verify the simulation, a magnetic field measurement in the air gap of the machine is devised using flexible printed circuit boards. A comparison shows that two basic assumptions of the field theory, in particular, are responsible for deviations between simulation and measurement. First, the field theory assumes a smooth air gap and thereby neglects the distorting influence of slots on the field distribution. Second, the field theory simplifies by assuming that iron regions are perfect magnetic conductors, thereby neglecting the nonlinear saturation behavior of ferromagnetic materials. As a first extension, the field theory is modified such that, instead of a smooth air gap of constant length, a spatially resolved function in the form of a Fourier series can be used to model the slotting. The air gap experiences a local widening. Different function shapes for representing the effective air-gap length are compared.
A renewed comparison between simulation and measurement shows a substantial improvement in agreement for operating points away from saturation. As a second extension, the nonlinear behavior of ferromagnetic materials is incorporated into the field theory. Since saturation can be interpreted as a reduction of magnetic permeances, it is mapped onto the air gap as a fictitious widening. Here, too, a spatially resolved function in the form of a Fourier series is used, in order to allow for a varying saturation state along the circumference of the machine. The calculation rules for the inductances change significantly due to the newly introduced air-gap dependence. The analytical derivation also allows the identification of saturation-induced waves that had previously been observed in the measurements. The complexity of the function is limited by the maximum order of the Fourier series. Because of the additional waves, the choice of order also affects the size of the voltage equation system and thus the computation time. For the parameterization of the saturation function, an iterative algorithm based on the magnetization curve of the material is introduced. Two function shapes of different order are compared. Depending on the chosen complexity, the simulation of one operating point completes within a few seconds. A final comparison between simulation and measurement shows that, by using such a function, operating points affected by saturation can also be represented accurately. For operating points with the highest degree of saturation, a more complex function of higher order yields an additional improvement.
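Both extensions described above replace the constant air gap by a circumferential function; a generic form of such a Fourier-series ansatz, in illustrative notation rather than the thesis's own, reads:

```latex
% Spatially resolved effective air-gap length along the circumferential
% coordinate alpha, expanded as a Fourier series (illustrative ansatz).
% delta_0: mean effective gap; the harmonics nu model the slotting and,
% in the second extension, the fictitious saturation-induced widening.
% Lambda(alpha) is the resulting air-gap permeance per unit area.
\delta(\alpha) = \delta_0 + \sum_{\nu=1}^{N}\delta_\nu\cos\!\left(\nu\alpha - \varphi_\nu\right),
\qquad
\Lambda(\alpha) = \frac{\mu_0}{\delta(\alpha)}
```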
- Conference object, Open Access: NIR-Emission Spectroscopy for Local Temperature Measurements in Premixed Hydrogen/Air Flames (2023) Schmidt, Nikolas; Braeuer, Phillipp; Grauer, Samuel; Bauer, Florian; Will, Stefan. Adopting hydrogen fuel for combustion at scale requires a deeper understanding of flame behavior with respect to the combustion properties of H2 and the type of burner used. Non-invasive optical diagnostics have the potential to enhance our understanding of H2 combustion. For instance, near-infrared (NIR) emission spectroscopy yields path-averaged information that can be employed to characterize the temperature field. In H2 flames, the emission spectrum of water vapor can be used to quantify the temperature field and elucidate the underlying physical processes. In the present study, temperature is determined throughout a premixed turbulent H2/air flame via NIR spectra, accounting for effects of the instrument and the experimental configuration.
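As a pointer to how emission spectra encode temperature, the classic two-line ratio relation below infers T from the relative intensity of two lines with different upper-state energies; this is a generic illustration only, whereas the study itself fits full water-vapor spectra with instrument effects included:

```latex
% Two-line emission thermometry (generic illustration, not the paper's
% spectral-fitting method). I_i: line intensities; A_i, g_i: Einstein
% coefficients and degeneracies; E_i: upper-state energies; k_B: Boltzmann
% constant; lambda_i: line wavelengths.
\frac{I_1}{I_2}
  = \frac{A_1 g_1 \lambda_2}{A_2 g_2 \lambda_1}
    \exp\!\left(-\frac{E_1 - E_2}{k_B T}\right)
\;\;\Longrightarrow\;\;
T = \frac{E_2 - E_1}{k_B \,\ln\!\left(\dfrac{I_1 A_2 g_2 \lambda_1}{I_2 A_1 g_1 \lambda_2}\right)}
```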
- Doctoral thesis, Open Access: Security and Privacy of Cryptocurrency Signatures (2023) Ronge, Viktoria; Schröder, Dominique. Cryptocurrencies have become an indispensable part of today's world. In their various forms they play an increasingly important role, both as a financial investment product and as a means of payment. However, since one does not directly see the parties involved in a transaction but at most unique identifiers, the erroneous assumption persists that cryptocurrencies are anonymous per se. This thesis deals with different forms of anonymity within cryptocurrency systems themselves and with what has to be considered when attempting deanonymization. Here, the term anonymity is used more broadly, to refer not only to the fact that it is not known who the actual sender (or receiver) in a given transaction is, but also that actually existing connections between different identifiers cannot be detected. The focus is on the cryptocurrencies Bitcoin, Monero and Zcash. Bitcoin was chosen because it is the currency with the highest market capitalization of all, while Monero and Zcash have the highest market capitalization among those currencies which, by their construction, ensure a certain degree of anonymity for both sender and receiver. In this context, Chapter 4 first deals with various assumptions that are applied in deanonymization. For these, a taxonomy with four categories is designed which allows statements to be made about the reliability of the individual groups, as well as to indicate factors which influence the reliability of at least a large part of the assumptions within a group. Motivated by selected assumptions, a definition for the measure of anonymity for cryptocurrencies is established in Chapter 5. This measure is especially valid for cryptocurrencies which, like Monero, use ring signatures to hide the actual sender of a transaction within a larger anonymity set. It is used to evaluate various concrete methods of anonymization, one of which is also constructed in Chapter 5. Chapter 6 concludes the main part of this work by constructing a protocol that uses threshold signatures. These allow a subset of fixed size of a pre-selected group to collectively generate a signature (and, in Bitcoin, use it to authorize a transaction). Such a signature can be prevented if only a single member concretely chosen to generate the signature is malicious. The new protocol overcomes this problem and allows a correct signature to be generated even in the presence of the maximum allowed number of malicious group members.
- Doctoral thesis, Open Access: Modeling and Verification of 4H-SiC Trench MOS Integration using Trench-First-Technology (2023) Lim, Minwho; Erlbacher, Tobias. This work is dedicated to the investigation of process integration techniques. It aims at enhancing the electrical performance of SiC MOS devices, with a focus on the 4H-SiC double-trench MOSFET. The research explores the implementation of a particular manufacturing process known as the "trench-first" process, where the formation of the trench structure precedes the implantation steps. This process sequence offers advantages in cases where equipment limitations hinder precise control over trench etching into the 4H-SiC epilayer due to doping concentration dependencies. The trench-first process facilitates an optimally rounded trench shape achieved through dry etching and subsequent annealing at 1400 °C in a hydrogen ambient, independent of doping concentration. Moreover, this work investigates a self-aligned implantation process designed to overcome the mask-alignment limitations of lithography. In this process, polysilicon is deposited and planarized within the trenches, and subsequent oxidation selectively oxidizes solely the polysilicon, leaving the 4H-SiC nearly unaffected. As a result, a combined polysilicon and oxide cap layer is formed within the trenches, serving as a self-aligned mask for the implantation process. The main objectives of this work involve investigating four key parameters using fabricated 4H-SiC double-trench MOS devices produced through the trench-first process. Each parameter is evaluated through a combination of simulation and fabrication, enabling a comprehensive comparison of the results. The first parameter under investigation is integration density. Here, analytical approximation and numerical simulation reveal that reducing the cell pitch improves both on-resistance and breakdown voltage. However, the inherent geometrical properties of the double-trench MOSFET and equipment limitations impose constraints on the minimum achievable cell pitch. Consequently, devices with cell pitches of 3.5 µm, 4.0 µm, and 5.0 µm are fabricated, and it is observed that the devices with a cell pitch of 3.5 µm exhibit the best on-resistance of 5.5 mΩ·cm² and a breakdown voltage of up to 1089 V. The investigation then focuses on the p-well implantation parameters for the 3.5 µm cell pitch devices. The p-well region directly influences the doping concentration and length of the vertical channel in the 4H-SiC double-trench MOSFET. Simulation results highlight that excessively low aluminum doping concentrations or shallow p-well regions are prone to premature reach-through breakdown, whereas high aluminum doping concentrations lead to an increase in on-resistance. As a result, it is proposed to employ appropriate p-well implantation parameters, such as a box-like plateau doping concentration of 5.0×10¹⁷ cm⁻³ or 7.0×10¹⁷ cm⁻³ with a depth of 0.9 µm, which yield the best on-resistance values of 3.4 mΩ·cm² and 2.5 mΩ·cm², respectively, along with breakdown voltages of 1120 V and 1060 V. Furthermore, the impact of tilted p+-implantation on breakdown capability is explored in terms of shielding efficiency. While on-resistance is not significantly affected, a 30° tilted implantation demonstrates an improvement in breakdown voltage by approx. 200 V. This improvement is attributed to the increased effectiveness of the shielding effect, which is achieved through an enhanced lateral distribution of aluminum dopants.
Another aspect investigated in this work is the stability of the gate oxide. To assess this, trench MOS capacitors are fabricated and evaluated. The gate oxide is formed using an LPCVD-TEOS layer as the chosen dielectric material. Post-deposition annealing is conducted in a nitric oxide ambient at 1250 °C for 1 or 2 hours, following established practices for planar MOS capacitors. The trench MOS capacitors, employing both the TEOS layer and the post-deposition annealing process, exhibit a dielectric breakdown field reduced by 25–35% compared to planar MOS capacitors, due to electric field crowding at the trench bottom edges. Additionally, to evaluate the interface state density on the trench sidewalls, a specific trench MOS capacitor with approx. 150 nm oxide thickness on the mesa and trench bottom, and approx. 45 nm oxide on the trench sidewalls, is fabricated. The interface state density of the trench sidewalls, approx. 4.6×10¹¹ eV⁻¹cm⁻², shows only a slight difference compared to a planar device, approx. 4.1×10¹¹ eV⁻¹cm⁻², near the conduction band edge. In summary, this work showcases the successful utilization of the trench-first process in fabricating the 4H-SiC double-trench MOSFET. Through meticulous optimization of key parameters such as cell pitch, p-well implantation, and tilted p+-implantation, notable improvements in the electrical performance of 4H-SiC double-trench MOS devices have been achieved. Moreover, the trench MOS capacitor serves as a valuable tool for evaluating the suitability of gate oxides for 4H-SiC trench MOSFET applications.
- Doctoral thesis, Open Access: Wind Noise Analysis, Synthesis, and Reduction for Speech Enhancement Using Compact Microphone Arrays (2023) Mirabilii, Daniele; Habets, Emanuël. Wind noise refers to random fluctuations of air pressure induced by a wind stream and captured by microphones. In contrast to acoustic noise, wind noise is not generated by propagating sound waves but by pressure fluctuations caused by turbulent air motion. Turbulence results in low-frequency rumbling distortions in audio recordings. Such artifacts are unwanted as they can significantly degrade desired acoustic signals in terms of quality and, in the case of speech, intelligibility. In addition, wind noise can represent a danger for hearing aid users, since it might mask safety-critical sounds (ambulance, alarm) and reduce spatial cues. Mechanical solutions to counteract wind noise have been designed for large-capsule microphones: so-called windscreens can diffuse and redirect the turbulence by covering the microphone with a foam hood. However, windscreens are unsuitable for compact devices like smartphones, wearables, action cameras, and hearing aids, as they reduce their usability and portability. For this reason, digital signal processing techniques are preferred to reduce wind noise and enhance the desired signal. Standard noise reduction techniques assume a stationary noise floor that varies more slowly than, e.g., speech. Hence, the noise statistics are commonly estimated during speech absence by exploiting voice activity detection or speech presence probability. However, wind noise is highly unpredictable and heavily non-stationary due to abrupt wind intensity changes, so that common noise reduction techniques yield unsatisfactory performance. Therefore, wind noise reduction methods have been developed in the past decades based on the contrasting temporal and spectral characteristics of acoustic and air turbulence-induced signals. In addition, the miniaturization and integration of multiple microphones in audio devices have enabled the design of multi-channel processing methods to attenuate wind noise. Such methods can combine spatial and spectral enhancement and provide increased performance compared to single-channel methods. Most multi-channel wind noise reduction approaches are derived assuming that the wind noise is spatially uncorrelated, and exploit the coherent nature of propagating acoustic waves to estimate the desired signal and wind noise statistics. This assumption can, however, be violated when microphone arrays with sufficiently small inter-microphone distances are employed, and could therefore limit the reduction performance. In this thesis, we present novel contributions to the analysis, synthesis, and reduction of wind noise measured with closely spaced microphones. Furthermore, we propose methods for estimating the wind speed and direction using a compact microphone array. In particular, we show how and under which conditions multi-channel wind noise can be correlated. Similar to acoustic fields that exhibit a spatially diffuse coherence model, we approximate the spatial coherence of wind noise contributions with a semi-empirical model, namely the Corcos model. According to the Corcos model, wind noise propagates across space at nearly the wind speed and in the same direction. However, the wind noise coherence decreases anisotropically, i.e., at different rates along the streamwise direction (parallel to the wind) and the spanwise direction (orthogonal to the wind).
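The Corcos model referred to above is commonly written as follows (notation illustrative; alpha_1 and alpha_3 are empirical decay constants, with the streamwise decay much slower than the spanwise one):

```latex
% Corcos model for the spatial coherence of wind noise between two
% microphones separated by xi_1 (streamwise) and xi_3 (spanwise).
% U_c: convection (wind) speed; alpha_1, alpha_3: empirical decay rates.
% The complex exponential encodes propagation along the wind direction.
\Gamma(\omega,\xi_1,\xi_3)
  = \exp\!\left(-\frac{\alpha_1\,\omega\,|\xi_1|}{U_c}\right)
    \exp\!\left(-\frac{\alpha_3\,\omega\,|\xi_3|}{U_c}\right)
    \exp\!\left(-\mathrm{j}\,\frac{\omega\,\xi_1}{U_c}\right)
```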
We validate the Corcos model with wind noise data collected indoors (wind tunnel) and outdoors (atmospheric wind). In addition, we show that specific temporal and spectral characteristics are related to the flow speed. As a first main contribution, we combine the aforementioned observations to design a multi-channel wind noise generation framework. The synthetic generation of noise samples facilitates developing and evaluating noise reduction techniques in a controlled environment. In fact, wind noise is generally difficult to isolate from outdoor recordings, where different acoustic sources can be simultaneously active. Furthermore, synthetic generation reduces, to a large extent, the time necessary to collect a sufficient amount of data. The proposed generation method takes wind speed and direction profiles as input and yields synthetic wind noise signals exhibiting a spatial coherence according to the Corcos model. In addition, we propose a multi-channel wind noise detector based on the contrasting spatial characteristics of speech and wind noise, and three methods to enhance the simulation of acoustic or turbulence-induced signals with a predefined spatial coherence. The second main contribution is the design of two methods for estimating the wind speed, the wind direction, or both, based on microphone signals measured with a compact array. Although conventional instruments can achieve high accuracy, employing closely spaced microphones has many advantages, e.g., integrability, high scalability, and a low-cost implementation. As the Corcos model depends on the wind velocity, the spatial coherence of wind noise provides information on the sought quantities. We first present a deep learning-based method to infer the wind speed, consisting of a feedforward neural network trained with synthetic wind noise correlated according to the Corcos model. Pair-wise spatial coherence functions are used as an input feature to obtain a wind speed estimate as output. We then present a method to resolve the speed and direction by fitting the measured spatial coherence to the analytical Corcos model in the least-squares sense. The methods are evaluated using data collected indoors and outdoors and labeled with an ultrasonic anemometer. As a third main contribution, we propose a multi-channel wind noise reduction method for closely spaced microphones based on a parametric multi-channel Wiener filter. In contrast to the well-established assumption of uncorrelated wind noise in multi-channel recordings, we assume a non-zero spatial coherence at low frequencies. For this reason, we recursively estimate the spatial coherence matrix based on the noisy microphone observations through an expectation-maximization approach. In addition, we prove the equivalence of two recently developed noise power spectral density estimation methods when uncorrelated wind noise is assumed. We propose an approximation of both estimators which is independent of the speech propagation vector under specific conditions. A frequency-dependent wind noise indicator, namely the difference-to-sum power ratio, is used as a parameter that trades off speech distortion and noise reduction in a parametric Wiener postfilter. An evaluation of improvements in speech quality, signal-to-noise ratio, and intelligibility is carried out using both simulated and measured wind noise samples, in comparison to an existing method.
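A minimal sketch of the least-squares estimation idea described above: fit the magnitude of measured pairwise coherences to the Corcos decay model over frequency. The array geometry, decay constants, and magnitude-only fitting are illustrative assumptions, not the thesis implementation (which also exploits the phase, resolving the direction ambiguity that magnitudes alone leave open).

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative least-squares fit of wind speed U_c and direction theta to
# the measured magnitude coherence of several microphone pairs, using the
# Corcos decay model. The alpha decay constants are typical literature
# values, not the thesis's calibrated ones; magnitude-only fitting leaves
# a quadrant ambiguity in theta, hence the [0, pi/2] bounds.
ALPHA_STREAM, ALPHA_SPAN = 0.125, 0.7

def corcos_mag(omega, pair_vec, u_c, theta):
    """|Coherence| predicted by the Corcos model for one microphone pair."""
    wind = np.array([np.cos(theta), np.sin(theta)])
    xi_stream = abs(pair_vec @ wind)                         # along-wind separation
    xi_span = abs(pair_vec @ np.array([-wind[1], wind[0]]))  # across-wind separation
    return np.exp(-omega * (ALPHA_STREAM * xi_stream + ALPHA_SPAN * xi_span) / u_c)

def fit_wind(omega, pair_vecs, coh_meas, u0=3.0, theta0=0.5):
    """coh_meas[i, k]: measured |coherence| of pair i at frequency omega[k]."""
    def residuals(p):
        u_c, theta = p
        model = np.array([corcos_mag(omega, v, u_c, theta) for v in pair_vecs])
        return (model - coh_meas).ravel()
    sol = least_squares(residuals, x0=[u0, theta0],
                        bounds=([0.1, 0.0], [50.0, np.pi / 2]))
    return sol.x  # estimated (wind speed [m/s], direction [rad])

# Usage with noise-free synthetic data for two orthogonal 2 cm pairs:
omega = 2 * np.pi * np.linspace(20, 500, 50)
pairs = [np.array([0.02, 0.0]), np.array([0.0, 0.02])]
truth = np.array([corcos_mag(omega, v, 5.0, 0.3) for v in pairs])
print(fit_wind(omega, pairs, truth))  # approximately [5.0, 0.3]
```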
Finally, we present a method to reduce wind noise in B-format signals that preserves the spatial distribution of the noise field in the surround format after the reduction step. An omnidirectional-to-dipole power ratio is formulated and employed to trade off desired-signal distortion against noise reduction.
- Doctoral thesis, Open Access: Modelling nanomechanical effects in advanced lithographic materials and processes (2023) DSilva, Sean; Erdmann, Andreas. Optical projection lithography involves the transfer of a photomask pattern onto a polymer material known as a photoresist. The diffracted light from the exposed photomask is projected onto the photoresist to produce the desired pattern. Depending on the application, a positive-tone development (PTD) or negative-tone development (NTD) process is employed, where either the exposed or unexposed regions are washed away, respectively. It is very important to investigate the various chemical, optical, and mechanical effects taking place within the photoresist polymer, since lithography is very much dependent on the interaction between these effects. The use of chemically amplified resists (CARs) has been instrumental in the reduction of printed feature sizes seen over the past few decades. These resists, with their superior resolution, have enabled high-volume production and continue to be a cost-effective option despite the challenges involved. They are, however, prone to shrinkage and deformation in the exposed areas, leading to inaccuracies in the critical dimensions (CDs). This makes the accurate pattern transfer from the photomask to the silicon wafer difficult and is especially problematic in the NTD process, where the exposed regions are left behind. Moreover, resist defects in modern extreme ultraviolet lithography (EUVL) are random events that do not scale down with the feature size and in turn cause catastrophic failures such as pattern collapse and feature distortions during manufacturing. All these undesirable effects observed during photoresist processing make modelling and simulation imperative to better understand and help mitigate or correct them. The main goal of this thesis is to individually model and simulate each of the effects contributing to the overall deformation of the resist, mainly during the exposure, post-exposure bake (PEB), chemical development, and resist rinsing. The physical models developed in this thesis serve as a baseline for optimizing and improving the current generation of lithography simulators. Also, they show the impact of varying physical parameters, including the thermal and material properties of the resist, on the overall deformation of the resist pattern. A finite element method (FEM) based model is developed to help simulate the shrinkage and volume losses seen in NTD resists. This new model uses a relational principle where the photocleavable polymer group concentration is directly related to the thermal expansion coefficient during the PEB step. The most significant shrinkage occurs during this step, and the critical dimension (CD), height, and volume of the final photoresist profile are greatly impacted. The proposed model helped simulate the shrinkage phenomenon effectively in numerous use cases and was validated using experimental data. The deformation during the PEB also leaves a certain amount of stress and strain within the resist bulk, which contributes to further deformation during and after the development step. This effect is neglected in current lithographic simulators, which can make accurate pattern prediction difficult.
A modified development rate model is introduced in this thesis that captures the influence of volumetric strain on the overall development rate; it helped simulate and validate the interaction of mechanical and optical proximity effects observed in experimental data. After development, there is a change in boundary conditions resulting in a free-standing feature. This free-standing feature, depending on its dimensions, shape, and feature density, can begin to relax due to a gradual decrease in the residual stresses. A rigorous modelling procedure was developed to help predict the extent of the sidewall-angle and CD variations that occur. After the chemical development, the resist surface is rinsed to remove the residual developer left behind. Uneven drying of the rinse liquid due to the pattern shape can cause an imbalance in the capillary forces and eventually lead to collapse. This effect is simulated, and the influence of dose and focus variations on pattern stability is studied. The presence of an underlayer in EUVL can make the pattern unstable and cause resist debonding or delamination due to the lack of adhesion between the resist and the underlayer. This thesis presents a modelling procedure to simulate the resist debonding effect effectively and explains the importance of contact stress for pattern stability. Localized regions of higher aspect ratios and higher feature densities arising due to line width roughness (LWR) can make the modelling of collapse and feature bending much more challenging. To alleviate this issue, a new machine-learning-based approach was implemented to predict collapse probabilities for resists with varying degrees of roughness arising from stochastic effects. The stability of simulated rough patterns in EUVL had not been investigated in the literature before. The approaches introduced in this thesis can be well integrated into lithography simulators to obtain a complete understanding of pattern stability.
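The "relational principle" of the FEM shrinkage model described above couples the local concentration of cleaved photocleavable groups to an effective thermal expansion coefficient; schematically, and in illustrative notation that is not the thesis's exact formulation:

```latex
% Schematic coupling for an FEM shrinkage model of NTD resists
% (illustrative only): the local volumetric shrinkage during PEB is
% expressed through an effective (reduced) expansion coefficient driven
% by the local concentration c(r) of cleaved photocleavable groups.
\alpha_{\mathrm{eff}}(\mathbf{r}) = \alpha_0 - \kappa\, c(\mathbf{r}),
\qquad
\varepsilon_{\mathrm{vol}}(\mathbf{r}) \approx 3\,\alpha_{\mathrm{eff}}(\mathbf{r})\,\Delta T_{\mathrm{PEB}}
```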
- Preprint, Open Access: Mixture Formation Analysis for Diesel, n-Dodecane, RME, and HVO in Large-Scale Injector Nozzles (SAE International, 2022-06-14) Fajri, Hamidreza; Clemente Mallada, Rafael; Riess, Sebastian; Strauß, Lukas; Wensing, Michael. Momentum conservation is a fundamental principle governing the behaviour of vapour jet and liquid spray penetration. The air entrainment and mixture formation processes are dominated by the momentum transferred from the fuel to the ambient gas. Thus, it is a significant factor in the development of spray and jet penetration. This mixture formation process is well described for small-scale passenger car injectors; however, it has to be investigated in more detail for large-scale injector nozzles. The current work provides qualitative and quantitative results for spray and jet parameters in a constant volume combustion chamber (CVC). Two optical methods have been utilized to evaluate spray and jet details: Schlieren photography as a method to visualize the jet penetration and cone angle, and Mie scattering for the phase change evaluation and the determination of liquid spray parameters. The temperature and pressure of inert gas and fuel inside the CVC are set to exemplify engine conditions. The chamber temperature ranges from 873 to 973 K, the chamber pressure is increased from 5 to 7 MPa, and the injection pressure is varied between 50 and 150 MPa. Four fuels are selected in order to shed light upon the effects of fuel properties on spray and jet parameters. As part of this evaluation, n-dodecane and two generations of biodiesel fuels, RME (1st generation) and HVO (2nd generation), are examined to establish their influence on mixture formation, which can be connected to emission production in diesel engines. Finally, two large-scale injector nozzles with an outlet hole diameter of 300 µm (a cylindrical and a conical nozzle) cover the effects of geometry parameters on spray and jet development. The results accentuate the fact that the liquid spray parameters are effectively fuel-dependent, while total jet parameters are mostly affected by nozzle geometry. Liquid spray length varies from n-dodecane as a low-boiling fuel to RME as a less volatile fuel. The conical nozzle results in less cavitation, which strongly influences liquid spray and total jet penetration.
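For context, the link between injected momentum and penetration stated above is often summarized by far-field scaling of the Naber-Siebers type, a standard correlation from the spray literature (not derived in this paper; notation illustrative):

```latex
% Far-field spray/jet tip penetration scaling (Naber-Siebers type).
% M_dot: nozzle momentum flux; rho_a: ambient gas density; rho_f: fuel
% density; A_eff: effective nozzle area; u_f: fuel exit velocity;
% theta: jet spreading (cone) angle; t: time after start of injection.
S(t) \;\propto\; \left(\frac{\dot{M}}{\rho_a}\right)^{1/4}
      \frac{\sqrt{t}}{\sqrt{\tan(\theta/2)}},
\qquad
\dot{M} = \rho_f\, A_{\mathrm{eff}}\, u_f^{\,2}
```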
- Preprint, Open Access: Optical Measurements of Two Cylindrical and Conical Heavy-Duty Diesel Injector Nozzles – A Comparison of Reference Diesel, HVO, and RME Fuels (Elsevier, 2023-06-01) Fajri, Hamidreza; Riess, Sebastian; Clemente Mallada, Rafael; Ruoff, Ilona; Wensing, Michael. In the present work, a constant volume/constant pressure combustion chamber was utilized to measure the major parameters of the vapour/liquid phases and the combustion of fuel sprays introduced by two heavy-duty, large-diameter diesel injectors. A conical nozzle (K factor 4) with a 300 µm outlet diameter and a 1300 µm length and a cylindrical nozzle (K factor 0) with a 300 µm outlet diameter and a 650 µm length display notably distinct impacts on the formation of the mixture, the air entrainment process, and the combustion behaviour. Several parameters, including jet and liquid spray penetration lengths, jet cone angle, and jet projected area, are combined to describe the spray characteristics in an inert background gas, and the ignition incidence, flame lift-off length, and soot appearance time are calculated to explain the combustion trends of the nozzles in a reactive gas background. These observations were conducted using four distinct optical techniques: Schlieren and Mie scattering for the inert condition, and OH* chemiluminescence and natural flame luminosity for the reactive condition. This study covers a wide variety of boundary conditions, comprising ambient pressure/temperature and fuel rail pressure, with diverse fuels, including reference diesel, HVO, and RME. The findings are categorized by the jet and liquid spray behaviours in the first part, demonstrating that the jet cone angle is an injector geometry-based parameter and that the cylindrical nozzle has a larger cone angle and a shorter liquid spray length. Three distinct scenarios of the computed air-to-fuel mass ratio at the same axial positions, time steps, and injected fuel mass indicate a marginally higher air entrainment in the cylindrical nozzle and, presumably, a faster and better mixture formation. The thorough examination of the combustion characteristics indicates that a quicker formation of the initial stoichiometric region in the cylindrical nozzle results in a shorter ignition delay at most experimental locations, which ultimately results in earlier soot initiation. However, depending on the flame lift-off length and the boundary conditions, the interval between ignition and the development of soot varies considerably. Finally, it is demonstrated that, with the exception of fuel pressure variation, the flame lift-off length, which is crucial in providing the initial mixture formation prior to combustion occurrence, follows the ignition delay trend and exhibits a shorter length with a shorter ignition delay.
- Doctoral thesis, Open Access: Deep Learning and Image Processing for Stroke Diagnosis with Computed Tomography Angiography (2023) Thamm, Florian; Maier, Andreas. Computed Tomography (CT) is an effective tool, especially in acute situations, which contributes significantly to diagnosing a wide variety of clinical pictures. This includes, with a high prevalence, ischemic stroke, in which an oxygen deficiency is caused by the occlusion of a vessel, e.g., by a thrombus. Typically, an interventional thrombectomy is performed, where the thrombus is surgically removed. Yet, this technically demanding procedure requires precise planning. The vessel pathway must be known, as well as the position of the occlusion. Insight is provided by administering a contrast agent, which makes it possible to highlight blood vessels with high contrast. This type of imaging is called CT angiography (CTA). However, the vascular tree with its many branches is complex, and concrete path planning is tedious and time-consuming. In this work, we present methods that use image processing and deep learning to process the CTA data in such a way that (1) the patient's vascular tree can be interactively explored, (2) vascular occlusions can be automatically and coarsely localized, and (3) the information gain from the available data is increased by enhancing soft tissue contrasts. First, we introduce "VirtualDSA++", an image processing pipeline for CTA head images that is used in many ways in the present work. In the first step of the pipeline, seed points are automatically determined for an initial segmentation of blood vessels through region growing. With the help of a vessel atlas, relevant blood vessels are located in the segmentation mask and, for a better overview, marked accordingly in the volume or in the later render view. The next step extends the segmentation to include distal arteries and veins. Subsequently, the vascular mask is skeletonized, and a surface model is computed on which different ways of interaction are possible. In this work, two concrete examples of interactive use were elaborated. First, the shortest paths between two points can be planned and visualized. Second, sub-trees at a certain geodesic distance from a reference point can be interactively hidden from the visualization. The latter makes it possible to restrict the visualization to arteries, which are usually of higher relevance than veins during the diagnosis of strokes. The segmentation of blood vessels computed by VirtualDSA++ is also used in the automated detection of large vessel occlusions (LVOs). Thrombi stop the blood flow in the affected blood vessel, including the flow of contrast agent. In CTA, therefore, affected vessels appear interrupted in their course and, thus, in their segmentation. Using 3-dimensional Convolutional Neural Networks (CNNs), an affected vessel tree can hence be classified for the presence of an LVO. However, our experiments have shown that this requires a large number of datasets. Thus, in our initial work, we show that we can significantly increase classification accuracy by aggressively deforming the blood vessel segmentations. Since the deformation is only applied to the vessel tree and not to the original scan, deformed trees still represent realistic cases, as vessel trees differ from patient to patient in their actual course anyway. In a follow-up work, we additionally introduced the recombination of vascular trees. The vessel trees are first divided into sub-regions, which are then recombined between patients.
With the data obtained, an increase in classification accuracy was again achieved. Dose and reconstruction in CTA scans are designed to highlight vessels, which is accompanied by a loss of soft-tissue contrast. The boundary between the gray and white matter of the brain becomes blurred, allowing only a limited view of the brain tissue. In our work ``SyNCCT'', we present a method that synthesizes non-contrast CT scans from CTA scans while using only one energy level of the CTA scan. The method consists of a GAN-based CNN that, in addition to a VirtualDSA++ segmentation of the blood vessels to be removed, also receives additional prior knowledge in the form of a statistical estimate of the target image. Quantitatively, the proposed approach was superior to existing methodologies. In a Turing test with physicians, the realism of the images was also confirmed. In the present work, it was shown that vascular tree extraction enables various applications. Through deep learning, image processing, and their combination, we were able to contribute to modern stroke diagnostics.
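For illustration, here is a minimal sketch of the kind of seeded region growing that such a vessel segmentation step builds on. It is not the VirtualDSA++ implementation; the intensity window and the toy volume are assumptions for demonstration.

```python
import numpy as np
from collections import deque

def region_grow_3d(volume, seeds, lower, upper):
    """Simple 26-connected 3D region growing: include voxels whose intensity
    lies in [lower, upper] and that connect to a seed point."""
    mask = np.zeros(volume.shape, dtype=bool)
    in_range = (volume >= lower) & (volume <= upper)
    queue = deque(s for s in seeds if in_range[s])
    offsets = [(dz, dy, dx) for dz in (-1, 0, 1) for dy in (-1, 0, 1)
               for dx in (-1, 0, 1) if (dz, dy, dx) != (0, 0, 0)]
    for s in queue:
        mask[s] = True
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < volume.shape[i] for i in range(3)) \
                    and in_range[n] and not mask[n]:
                mask[n] = True
                queue.append(n)
    return mask

# Toy example: a bright "vessel" along one axis in a noisy volume
vol = np.random.normal(40, 5, (32, 32, 32))
vol[16, 16, :] = 400                      # contrast-filled vessel (HU-like values)
seg = region_grow_3d(vol, seeds=[(16, 16, 0)], lower=150, upper=600)
print(seg.sum())                          # -> 32 voxels
```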
- Doctoral thesisOpen AccessELASTIC NONWOVENS PRODUCED FROM ETHER BASED THERMOPLASTIC POLYURETHANES BY MELT BLOWING(2023) Woelfel, Bastian; Schubert, Dirk W.Thermoplastic Polyurethanes (TPUs) are polymers in the class of thermoplastic elastomers, which combine the desirable properties of an elastomer while retaining the processability of thermoplastics. TPUs are multi-block copolymers consisting of a) crystallizable hard segments (HS) and b) elastic soft segments (SS). The tunability of TPUs' basic properties through the HS ratio marks the main advantage over common thermoplastic polymers. In combination with the pronounced low-temperature flexibility and enhanced resistance to hydrolysis, an ether-based TPU forms the ideal platform for this investigation. The goal is to translate general TPU raw material correlations, considering different states of matter, into meltblown processing and resulting product performance for a holistic yield, including recycling behavior. Therefore, the full range of commercially available Shore hardness was investigated, and the influence of HS content on density, molar mass (MW), tensile performance, crystallization behavior, resulting microstructure, and degradation processes was quantified at the raw material level. Several cross-correlations between different analysis methods were drawn, in turn creating new analytic prospects for TPUs and enabling economization of measurement times. Nonwovens (NW) in general, and especially when manufactured by melt blowing (MB), offer advantages over conventional elastic fabrics, primarily regarding sustainability and scalability. During MB processing, elevated temperatures and deformation processes were found to be prevalent, and their impact on TPU raw materials of different HS ratios was investigated. In addition to the HS ratio, MW was found to have a major influence on product performance and processing conditions. The quantification of the MW impact was realized through a recycling study. A two-step degradation process was detected: HS degrade before SS, which in turn affects structure formation over various recycling steps. It was shown and motivated that for the recycled materials η_TPU ∝ MW^2 (not the usual MW^3.4) holds. As viscosity was found to be the governing factor for MB processing, an extensive rheological study was conducted, yielding a comprehensive model describing the zero-shear viscosity as a function of HS ratio and temperature. This enabled a transfer from the raw material level to the processing level, and parameter sets for the MB of TPUs were calculated. By correlating the results of the obtained NWs to self-adhesion experiments, it was revealed that (1) tensile strength mainly depends on fiber-to-fiber adhesion and (2) tear strength on single-fiber strength. This finding was additionally verified by calendering experiments. The described fiber properties are inherently dependent on HS ratio and MW. Accordingly, trials spanning the MB parameter space were conducted with an elastic TPU grade. An optimal parameter set achieving NW of high tear strength, whilst retaining high tensile strength, was revealed. Interestingly, the optimal NW molecular constitution was found to be similar to that of the highest-performing NW obtained from the recycling trials. This leads to the conclusion that MW ≈ 63 kg/mol features the optimum combination of fiber strength and adhesion properties for an elastic TPU material containing 33 wt% of HS.
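To make the scaling analysis concrete, here is a small sketch of how an exponent such as the reported η_TPU ∝ MW^2 can be estimated from viscosity versus molar mass data via a log-log fit. The data below are synthetic, and the prefactor is an assumption, not a value from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical zero-shear viscosity data (Pa*s) vs. molar mass (kg/mol),
# generated around an assumed exponent of 2 for illustration only.
m_w = np.array([40.0, 55.0, 63.0, 80.0, 100.0])
eta = 0.05 * m_w ** 2 * np.exp(rng.normal(0, 0.05, m_w.size))

# Fit log(eta) = a * log(M_W) + b; the slope a estimates the scaling exponent
a, b = np.polyfit(np.log(m_w), np.log(eta), 1)
print(f"estimated exponent: {a:.2f}")   # ~2 for the synthetic data above
```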
- Doctoral thesisOpen AccessModeling and Simulation of Three-Dimensional Vehicular Ad Hoc Networks(2023) Brummer, Alexander; German, ReinhardVehicle-to-Everything (V2X) communication is a promising technology serving as an important element towards improved safety and efficiency in road traffic. The packet-based simulation of Vehicular Ad Hoc Networks (VANETs) is an indispensable tool for research and development in this field. In the past, various models were presented which increased the degree of realism achievable with such simulations. Nevertheless, state-of-the-art simulators usually assume a two-dimensional environment only, so that vehicular networks in many real-world traffic situations cannot be investigated properly. In this thesis, we present extensions and propagation models that facilitate the simulation of three-dimensional VANETs. To deal with obstructions caused by other vehicles and the surrounding terrain, we first describe the implementation of an environmental diffraction model. Even in the absence of obstacles in the direct signal path, adapted considerations are required. Therefore, we introduce an n-ray ground interference model, which can be seen as a generalization of the widely used two-ray interference model. Furthermore, we add support for communication across multiple road levels by incorporating a floor attenuation model, and also examine the special case of multi-story parking garages, complemented by the inclusion of 3D antenna patterns. The analyses of several example scenarios show that the presented 3D considerations can lead to significantly different simulation results in comparison with the conventional simulation approach. Large deviations in the resulting path loss can be observed, which may lead to different, mostly decreased success rates of packet transmissions. As a consequence, the conclusions drawn from simulations with either setup may vary widely. To evaluate our approach, we further tested all of the presented propagation models with the help of extensive measurement studies, which show that the models' outputs are in good agreement with the observed attenuation. Finally, we describe how the available models can be combined for application in arbitrary, large-scale scenarios by means of a model selector. Furthermore, as the increased level of detail leads to a higher computational complexity, we present several methods to improve the simulation performance.
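For reference, here is a minimal sketch of the classic two-ray ground interference model that the n-ray model generalizes. The perfectly reflecting ground (reflection coefficient −1) and the antenna heights in the example are simplifying assumptions, not the thesis' parameterization.

```python
import numpy as np

def two_ray_interference_loss(d, h_t, h_r, freq, gamma=-1.0):
    """Path loss [dB] of the classic two-ray ground interference model.

    d: ground distance Tx-Rx [m], h_t/h_r: antenna heights [m],
    freq: carrier frequency [Hz], gamma: ground reflection coefficient
    (gamma = -1 assumes a perfectly reflecting ground for simplicity).
    """
    lam = 3e8 / freq
    d_los = np.sqrt(d**2 + (h_t - h_r)**2)      # direct ray
    d_ref = np.sqrt(d**2 + (h_t + h_r)**2)      # ground-reflected ray
    phi = 2 * np.pi * (d_ref - d_los) / lam     # phase difference of the rays
    interference = np.abs(1 + gamma * (d_los / d_ref) * np.exp(1j * phi))
    p_rx_rel = (lam / (4 * np.pi * d_los))**2 * interference**2
    return -10 * np.log10(p_rx_rel)

# 5.9 GHz (ITS band), 1.5 m antennas: note the oscillating fading pattern
for d in (10, 50, 100, 200):
    print(d, round(two_ray_interference_loss(d, 1.5, 1.5, 5.9e9), 1), "dB")
```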
- Doctoral thesisOpen AccessDirekte numerische Simulation der Fällung von organischen Nanopartikeln(2023) Schikarski, Tobias; Peukert, WolfgangNucleation and growth of (nano)particles are the primary solid formation mechanisms in precipitation from the liquid phase. Supersaturation dictates nucleation and growth and thus controls the course of solid formation, provided there is sufficient stabilization against secondary solid formation processes such as agglomeration, aggregation, or ripening. The mixing of the feed solutions in turn influences the build-up of supersaturation, which is why the precipitation product can be mixing-controlled. Two central questions of this work are which conditions must be met for mixing control, and whether overarching relationships exist between the mixing rate and the precipitation product. These important questions are addressed in this work with a numerical approach. The individual sub-steps of precipitation span many orders of magnitude in time and space and are strongly non-linearly coupled, which makes the quantitative prediction of this complex multiscale process by simulation a major challenge. In particular, reproducing the particle size distribution under different process conditions, such as mixing rates, feed concentrations, solvents, and mixing devices, has remained out of reach. This work takes on the task of developing a physically grounded, generally applicable simulation concept at the macroscopic level that makes it possible to predict the particle size distribution of precipitated, very well stabilized nanoparticles under a wide range of process conditions. Based on flow simulations coupled with a population balance equation, the precipitation of the poorly soluble drug ibuprofen in a T-mixer and in a three-inlet mixer is used to show how the relevant sub-processes can be modelled accurately, so that a predictive calculation of the precipitation product is achieved. To this end, three important aspects of the precipitation process are considered: the mixing process, the interaction between mixing and solid formation, and the solid formation itself. In the first part, the mixing behaviour of two liquid streams is investigated by direct numerical simulation for flow conditions ranging from laminar to highly turbulent. Two comparisons are in focus here: on the one hand, the numerically computed mixing time is compared quantitatively against the experimentally determined mixing time for two mixing aqueous solutions (constant fluid properties); on the other hand, the mixing behaviour after replacing one aqueous solution with an organic solvent is examined purely by simulation, thereby studying how the non-linear dependence of viscosity and density on the fluid composition influences the mixing process. Diffusive mass transfer between two miscible liquids is much slower than viscous momentum exchange. In a fluid flow, this manifests itself in mixing scales that are much smaller than the velocity scales. These smallest mixing scales currently cannot be resolved in flow simulations and must be modelled in order to enable a quantitative comparison with experimental results. To this end, the second part of the work introduces a new concept of an effective transport coefficient, based on the idea of correctly reproducing the Damköhler number in the simulations according to the experimental conditions. Building on the first two parts, the third part addresses the quantitative modelling of the precipitation of ibuprofen. Using experimental data, initially unknown material parameters of ibuprofen governing solid formation are first determined. In a further step, an approach is presented for estimating the initially unknown global Damköhler number in the simulations. This approach makes it possible for the first time to quantitatively compute the complete particle size distribution of precipitated nanoparticles for different flow conditions (from laminar to highly turbulent), initial drug concentrations, solvents, and mixing devices. Furthermore, a possibly universal power law relating the mixing time to the mean particle size of the precipitation product is revealed. The work concludes with the discovery of a dimensionless process function for mixing-controlled precipitation, which corroborates that the mean particle size, and likewise the particle size distribution, is primarily dictated by the ratio of the kinetics of the individual sub-processes to one another and not by their absolute values.
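As an illustration of the final power-law analysis, here is a short sketch fitting d_p ∝ t_mix^α on synthetic data. The exponent and prefactor below are invented for demonstration and are not the values reported in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical (mixing time [ms], mean particle size [nm]) pairs, generated
# around an assumed power law for illustration only.
t_mix = np.array([0.5, 1.0, 2.0, 5.0, 10.0])
d_p = 120.0 * t_mix ** 0.4 * np.exp(rng.normal(0, 0.03, t_mix.size))

# Fit log(d_p) = alpha * log(t_mix) + log(c) to recover the exponent
alpha, log_c = np.polyfit(np.log(t_mix), np.log(d_p), 1)
print(f"d_p ≈ {np.exp(log_c):.0f} nm * (t_mix/ms)^{alpha:.2f}")
```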
- Doctoral thesisOpen AccessThe QDAcity-RE-RS Method for Creating Complete, Consistent, and Traceable Requirements Specifications(2023) Mucha, Julia; Riehle, DirkResolving issues that arise in a software project becomes more expensive as the project advances. Hence, it is crucial for a requirements engineer to develop a complete, consistent, and traceable Requirements Specification (RS). To create such a high-quality RS, the requirements engineer must develop a deep understanding of the problem domain. In science, researchers face a similar challenge when building theories. Constructing a theory requires considering all related resources to capture a broad diversity of characteristics. In addition, researchers must document the theory-building process to demonstrate high quality and ensure traceability for other researchers who want to build on the theory. Therefore, researchers developed methods for theory building, from which we draw inspiration for our research. In this dissertation, we present the development and evaluation of the tool-supported QDAcity-RE-RS method, which utilizes Qualitative Data Analysis (QDA). Thus, this method supports the analysis of stakeholder information, such as interview transcripts, to create natural language RS. We followed the design science process proposed by Peffers et al. [139], which focuses on developing and evaluating an innovative artifact to solve a clearly defined problem. To identify the problem, we performed a Systematic Literature Review (SLR) and conducted an exploratory qualitative survey with 14 Requirements Engineering (RE) experts from industry. Based on the findings, we developed the QDAcity-RE-RS method and implemented its tool support. To demonstrate the method, we conducted a case study and triangulated it using a focus group session with 38 RE experts from industry. Subsequently, we evaluated the artifact using two approaches. First, we evaluated the tool support using a heuristic evaluation, which uncovered five usability issues and nine additional improvements. Second, we evaluated the QDAcity-RE-RS method by performing an evaluative case study in an industry project. This study reveals that the method supports various activities within the RE process, such as documenting decisions and preparing stakeholder meetings. The RS resulting from the QDAcity-RE-RS method was complete with respect to the analyzed stakeholder information but lacked requirements on topics that were not explicitly discussed, such as requirements for the backend. Notably, the RS proved particularly strong in maintaining consistent usage of technical terms, whereas the requirements in the industry project exhibited numerous variations of technical terminology. The pre-Requirements Specification Traceability (pre-RST) of the requirements to their origin, and thus to decisions, was found to be very helpful by the requirements engineer of the industry project, as it facilitates the transfer of information to other requirements engineers. The resulting tool-supported QDAcity-RE-RS method facilitates the creation of a natural language RS, enables the documentation of the creation process, and thereby realizes pre-RST. By linking requirements to their origin, the method allows for assessing completeness and identifying inconsistencies.
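A minimal sketch of what pre-RST trace links could look like as a data structure, with each requirement keeping pointers back to the coded source segments it was derived from. All class and field names here are hypothetical and do not reflect the actual QDAcity data model.

```python
from dataclasses import dataclass, field

@dataclass
class SourceSegment:
    document: str          # e.g., an interview transcript
    excerpt: str           # the coded passage the requirement was derived from

@dataclass
class Requirement:
    req_id: str
    text: str
    origin: list[SourceSegment] = field(default_factory=list)

    def is_traceable(self) -> bool:
        # pre-RST in this toy model: at least one link back to its origin
        return len(self.origin) > 0

req = Requirement(
    "REQ-7",
    "The system shall export reports as PDF.",
    [SourceSegment("interview_03.txt",
                   "we always hand the report to the client as PDF")],
)
print(req.is_traceable())   # -> True
```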
- Doctoral thesisOpen AccessPredicting Materials Deformation and Failure by Machine Learning(2023) Hiemer, Stefan; Zaiser, MichaelMaterial failure has a significant impact on society by harming people, damaging infrastructure and property, and disrupting daily life and economic activities. The deformation and failure of materials is investigated in this thesis by means of machine learning methods, with special attention to the effects of disorder on the emerging macroscopic behaviour. Advances were made in three specific areas of research: i) the localization of plasticity in atomistic, simulated glasses; ii) the performance assessment of default machine learning approaches for the description of dislocation-based plasticity modeling in fcc metals; iii) the connection of specimen lifetime to the time series of damage evolution during creep. For the prediction of localized plastic events in two-dimensional silica glasses, a set of atom-centered features based on symmetry considerations and a straightforward image recognition approach were proposed. The symmetry features, in connection with support vector classifiers, identify a sub-group of ~4 % of all atoms that carries ~99 % of the particles involved in breaking the first bond in their respective sample. The image recognition approach, by means of an artificial neural network, predicts the rupture strain, the crack path, and the location of the first bond break with reasonable accuracy and is made qualitatively interpretable by attention maps obtained with Gradient-weighted Class Activation Mapping. Correlations of defects and atomic potential energies with these attention maps are revealed. Future effort will be directed towards a fusion of both methods to yield theoretically consistent, accurate machine learning methods and the construction of quantitatively interpretable models. Intended as verification of a proposed change of research paradigm and as a simple benchmark problem for dislocation-based plasticity, the inference of flow stress from experimental and simulated data of fcc metals by a theoretically motivated scaling relationship was compared to established machine learning methods. When extrapolating from simulated to experimental conditions, machine learning showed worse performance than the scaling law. This indicates the need for machine learning methods which i) incorporate the symmetries of the physics underlying dislocation motion (or the behaviour of defects in general) and ii) show correct asymptotic behaviour as far as can be inferred from theory. For the prediction of sample lifetimes of disordered materials, two scientific contributions have been made. First, feature importance assessment with random forests trained on artificial data of brittle and semi-brittle disordered systems indicated that strain-related features are important for the models to make accurate predictions, as opposed to the rate-based features suggested in the literature. Second, for a fiber bundle model of stochastic, thermally activated creep, a closed-form expression for the lifetime distribution has been derived for the special case of homogeneous fiber bundles. Exact and asymptotic solutions for the lifetime average and variance have been presented. Both contributions are intended to help other researchers infer accurate models from data, with proper initial guesses regarding functional form and variable selection or, in the case of Bayesian models, well-suited prior probability distributions.
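To make the creep model tangible, here is a Monte Carlo sketch of a homogeneous fiber bundle with thermally activated failure and global load sharing. The exponential rate law and all parameter values are generic modeling assumptions, not the thesis' exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def bundle_lifetime(n_fibers, load, nu0=1.0, s0=1.0):
    """Monte Carlo lifetime of a homogeneous fiber bundle under thermally
    activated creep: each surviving fiber fails at rate nu0 * exp(stress / s0),
    where stress = total load / number of survivors (global load sharing)."""
    t, alive = 0.0, n_fibers
    while alive > 0:
        stress = load / alive                 # equal load sharing
        rate = alive * nu0 * np.exp(stress / s0)
        t += rng.exponential(1.0 / rate)      # Gillespie step to next failure
        alive -= 1
    return t

# Sample lifetimes and report mean and variance, the quantities for which
# the thesis derives exact and asymptotic expressions
samples = [bundle_lifetime(100, load=50.0) for _ in range(200)]
print(np.mean(samples), np.var(samples))
```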
- Doctoral thesisOpen AccessDeep Learning based Radio Resource Management in Cellular Vehicular Communication(2023) Bhadauria, Shubhangi; Fischer, GeorgC-V2X sidelink communication was introduced by the 3rd Generation Partnership Project (3GPP) in Long Term Evolution (LTE) to pave the way for future intelligent transport solutions. The vision of C-V2X sidelink communication is to support a diversified range of use cases, e.g., advanced driving and collision avoidance, with stringent quality of service (QoS) requirements. The QoS requirements vary from ultra-reliable low latency to high data rates, depending on the supported application. Radio resource management (RRM) is vital to achieving the required QoS in C-V2X sidelink communication due to the limited spectrum availability. RRM becomes challenging in C-V2X sidelink communication because of the high mobility and dynamic traffic patterns of the Vehicle User Equipments (V-UEs) on top of the limited spectrum availability. Therefore, this thesis proposes an intelligent RRM approach that takes into account the dynamic behavior of the C-V2X ecosystem and meets the required QoS in terms of latency and reliability. Two problems under C-V2X RRM are discussed in the thesis. First, the radio resource allocation mechanism for V-UEs in unicast C-V2X sidelink communication is investigated. The proposed solution is a decentralized QoS-based deep reinforcement learning (DRL) radio resource allocation approach. As the proposed approach is decentralized, each V-UE maintaining one unicast link at a time is assumed to act as an agent; hence, the intelligence is embedded at the V-UE side. The Mode 2 in-coverage scenario consists of Vehicle-to-Network (V2N) and Vehicle-to-Vehicle (V2V) links. The QoS parameter incorporated in the proposed scheme is the independent QoS parameter, i.e., the priority associated with each C-V2X message. The priority reflects the allowed latency budget within which a V-UE has to transmit a packet to meet the latency requirements of the supported C-V2X application. The goal of the V-UE agent is to meet the latency constraints of the V2V links associated with the respective priority while maximizing the throughput of all V2N links. A performance evaluation of the proposed approach is carried out based on system-level simulations for both urban and highway scenarios. The results show that incorporating the QoS parameter (i.e., priority) in the DRL-based resource allocation is crucial for the V-UE agent to meet the latency requirements of different C-V2X applications. The proposed QoS-based DRL radio resource allocation approach is further analyzed for the scenario where a V-UE can support multiple unicast links simultaneously. A performance comparison to evaluate the impact of multiple services is conducted through a single-agent reinforcement learning (SARL) and a multi-agent reinforcement learning (MARL) approach. Additionally, the QoS parameters considered here are the latency and the relative distance between the V-UEs with the established unicast link, which are mapped to the priority to reflect the packet delay budget (PDB). This scenario also assumes V2N and V2V links in unicast communication in an urban setting. The goal of the V-UE agent here, too, is to meet the latency constraints of the V2V links associated with the respective priority while maximizing the throughput of all V2N links.
System-level simulation results indicate that MARL achieves a higher V2N throughput for single and multiple service support than SARL. However, in meeting the latency constraints, SARL performs better for multiple service support per V-UE. It can also be concluded that, overall, in the case of multiple service support per V-UE, the probability of meeting the latency constraint is reduced for both SARL and MARL. The second problem investigated in the thesis is a DRL-based congestion control approach for V-UEs experiencing high channel load and hence performance degradation in achieving the required QoS. The DRL-based congestion control approach is formulated for a unified and a location-based segregated resource pool. The scenario under consideration consists of V-UEs in dynamic groupcast communication and Mode 2 in coverage. The intelligence is assumed to reside at the base station; therefore, the formulated approach is a centralized DRL congestion control. A performance evaluation of the algorithm is conducted for periodic and aperiodic traffic models in a realistic mobility scenario generated with the Simulation of Urban Mobility (SUMO) platform. The simulation results show that the proposed DRL-based congestion control approach achieves a packet reception ratio (PRR) in line with the packet's associated QoS irrespective of the resource pool configuration. However, the PRR achieved with the DRL-based congestion control approach is better for periodic traffic than for aperiodic traffic. The DRL agent can maintain the average measured Channel Busy Ratio (CBR) below 0.65 irrespective of the resource pool configuration.
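As a toy illustration of the optimization goal described above, here is a hedged sketch of a reward that trades V2N throughput against priority-dependent V2V latency budgets. The budget table, weights, and penalty are invented for demonstration and are not the thesis' reward design.

```python
# Hypothetical packet delay budgets (PDB) per priority class, in seconds;
# the values are illustrative assumptions only.
PDB_PER_PRIORITY = {1: 0.02, 2: 0.05, 3: 0.10}

def reward(v2n_throughput_mbps, v2v_latencies, v2v_priorities,
           w=1.0, penalty=5.0):
    """Reward = weighted V2N throughput minus a penalty for every V2V
    packet that misses the latency budget of its priority class."""
    r = w * v2n_throughput_mbps
    for lat, prio in zip(v2v_latencies, v2v_priorities):
        if lat > PDB_PER_PRIORITY[prio]:      # missed the latency budget
            r -= penalty
    return r

# One link meets its 20 ms budget, the other misses its 50 ms budget
print(reward(12.0, [0.015, 0.08], [1, 2]))    # -> 12 - 5 = 7.0
```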
- Doctoral thesisOpen AccessActive Deep Learning of Representations for Similarity Search(2023) Löffler, Christoffer; Eskofier, BjörnIn recent years, Deep Learning (DL) has revolutionized many fields in computer science, such as Computer Vision (CV), Natural Language Processing (NLP), and Information Retrieval (IR). For example, modern DL is at the core of systems that process large amounts of complex data, such as images, video, or text, and then retrieve information or even generate semantically meaningful responses to queries within fractions of a second. However, the (historic) data that is available for many potential applications brings new problems that require human attention. Cleaning, preparing, and annotating data can become a needle-in-a-haystack problem: it takes too much time and increases costs. Some data may even never be annotated in sufficient detail and thus requires alternative solutions. In this cumulative dissertation, I address gaps in the literature at three levels of the Machine Learning process that enable the modeling of complex data and reduce the cost of annotations. The first objective considers issues with the retrieval of complex, unstructured, and sparsely annotated data from large (historic) databases. We proposed a metric learner [P1] that learns a lower-dimensional representation of the data and thus enables efficient Information Retrieval. It jointly estimates a structure of the unstructured data and learns pairwise similarity, such that a previously prohibitive distance metric can be calculated orders of magnitude faster. The second objective consists of a unified Deep Active Learning (DAL) policy that reduces the annotation costs in Deep Learning via the use of Active Learning. We propose Imitating Active Learner Ensembles (IALE) [P2], an Imitation Learning approach to DAL that leverages the state of a learner Deep Neural Network (DNN) and uses different signals to learn how to learn actively from multiple heuristics. Our method then acquires more informative samples than any of these baseline heuristics. The third objective considers cost-efficient learning of individual similarity functions in cases where Machine Learning models for Information Retrieval suffer from a semantic gap in the problem domain. In [P3], we propose to learn similarity from few annotated samples by combining fine-tuning of large pre-trained models with Active Learning sampling methods to reduce cost. We present a user study that demonstrates the strong benefits of the sampling method with respect to both cost and query difficulty. To summarize, we contribute to Metric Learning, Deep Active Learning, and Information Retrieval, and show adaptive similarity search for unstructured data. Hence, our work facilitates the efficient use of annotators' time and the collection of high-quality annotations, even on highly complex data.
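For context, here is a minimal pool-based active learning loop with least-confidence uncertainty sampling, one of the heuristic baselines such a policy competes with. It is deliberately generic and is not the IALE method, which learns the acquisition policy itself by imitation; the data and model are toy assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic binary classification pool
X = rng.normal(size=(500, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Seed the labeled set with a few examples of each class
labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
pool = [i for i in range(len(X)) if i not in labeled]

for round_ in range(5):
    clf = LogisticRegression().fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[pool])
    uncertainty = 1 - proba.max(axis=1)         # least-confidence score
    pick = pool[int(np.argmax(uncertainty))]    # query the most uncertain sample
    labeled.append(pick)
    pool.remove(pick)
    print(round_, clf.score(X, y))              # accuracy as labels accumulate
```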