Journal articles: 'Histogramme de forces' – Grafiati (2024)





Author: Grafiati

Published: 25 May 2024

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Histogramme de forces.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Revelly, J. P., F. Feihl, T. Liebling, and C. Perret. "Time constant histograms from the forced expired volume signal: a clinical evaluation." European Respiratory Journal 2, no. 6 (June 1, 1989): 536–42. http://dx.doi.org/10.1183/09031936.93.02060536.


Abstract:

We evaluated a multicompartment analysis of forced expiration, based on modelling the lung as a set of twenty parallel compartments emptying exponentially with time constants ranging from 0.1-10 s; the forced expired volume signal was represented by a histogram showing the fraction of forced vital capacity as a function of compartmental time constants. We applied this technique to 80 healthy and 12 asthmatic subjects. The histograms computed from three consecutive forced expirations were poorly reproducible in 18 of the 80 healthy and 2 of the 12 asthmatic subjects. In the asthmatics, the time constant histograms conveyed no additional information on bronchial obstruction, beyond that already present in standard spirometric indices. A simulation study showed a high sensitivity of the histograms to the truncation of the terminal part of forced expiration. We conclude that the usefulness of the time constant histogram technique appears doubtful.
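As an illustration of the modelling idea described in this abstract (a minimal sketch with made-up compartment fractions, not the authors' code), the forced expired volume signal can be represented as a sum of exponentially emptying compartments, and the time constant histogram is simply the FVC fraction per compartment plotted against its time constant:

```python
import math
import random

# Illustrative sketch (not the authors' implementation): model forced
# expiration as 20 parallel compartments emptying exponentially, with
# time constants log-spaced between 0.1 and 10 s.
N = 20
tau = [0.1 * (10.0 / 0.1) ** (i / (N - 1)) for i in range(N)]  # time constants [s]

rng = random.Random(0)
weights = [rng.random() for _ in range(N)]
total = sum(weights)
frac = [w / total for w in weights]  # assumed FVC fraction per compartment

def expired_volume(t):
    """Normalized expired volume: V(t) = sum_i f_i * (1 - exp(-t / tau_i))."""
    return sum(f * (1.0 - math.exp(-t / tc)) for f, tc in zip(frac, tau))

# The "time constant histogram" is frac plotted against tau.
histogram = list(zip(tau, frac))
```

The histogram sums to 1 (all of FVC is accounted for), and truncating the terminal part of expiration mainly affects the long-time-constant bins, which is the sensitivity the authors report.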

2

Su, Lian Cheng, Yan E. Shi, and Xiao Li Li. "Fault Diagnosis of Bearing Based on the Ultrasonic Amplitude Histogram." Advanced Materials Research 479-481 (February 2012): 1361–64. http://dx.doi.org/10.4028/www.scientific.net/amr.479-481.1361.


Abstract:

Bearings in rotating machines often fail due to the frictional forces of the rolling elements, and these frictional forces generate ultrasound; there must therefore be a relationship between faults in rotating machinery and the ultrasound emitted by the equipment. In this paper, the histogram method, the simplest method in statistics, is used to analyze the ultrasonic signals. Through this analysis, bearing faults can be preliminarily diagnosed. A verification study against the vibration method is carried out to assess the effectiveness and superiority of the ultrasonic method using histograms. Extensive experiments performed on a bearing vibration measuring instrument indicate that fault diagnosis based on the ultrasonic amplitude histogram is more useful, more sensitive and more effective than diagnosis based on the vibration amplitude histogram.

3

Gurung, Ram B., Tony Lindgren, and Henrik Boström. "Learning Random Forest from Histogram Data Using Split Specific Axis Rotation." International Journal of Machine Learning and Computing 8, no. 1 (February 2018): 74–79. http://dx.doi.org/10.18178/ijmlc.2018.8.1.666.


4

Buck, Andrew R., James M. Keller, and Marjorie Skubic. "A Memetic Algorithm for Matching Spatial Configurations With the Histograms of Forces." IEEE Transactions on Evolutionary Computation 17, no. 4 (August 2013): 588–604. http://dx.doi.org/10.1109/tevc.2012.2226889.


5

Grepl, Jan, Karel Frydrýšek, and Marek Penhaker. "A Probabilistic Model of the Interaction between a Sitting Man and a Seat." Applied Mechanics and Materials 684 (October 2014): 413–19. http://dx.doi.org/10.4028/www.scientific.net/amm.684.413.


Abstract:

This article focuses on the biomechanical evaluation of the interaction between load forces to which a sitting man and the seat are mutually exposed. The load forces, which consider actual dispersion in the human population through histograms, are determined using a probabilistic method known as the Simulation-Based Reliability Assessment (SBRA). A simple flat model shows a basic and high-quality stochastic evaluation of all the forces that affect the man and the seat. The results and methodology can be used in many areas of biomechanics, ergonomics or industrial design.

6

Ni, Jingbo, and Pascal Matsakis. "An equivalent definition of the histogram of forces: Theoretical and algorithmic implications." Pattern Recognition 43, no. 4 (April 2010): 1607–17. http://dx.doi.org/10.1016/j.patcog.2009.09.020.


7

Chapple, William D. "Regulation of Muscle Stiffness During Periodic Length Changes in the Isolated Abdomen of the Hermit Crab." Journal of Neurophysiology 78, no. 3 (September 1, 1997): 1491–503. http://dx.doi.org/10.1152/jn.1997.78.3.1491.


Abstract:

Chapple, William. Regulation of muscle stiffness during periodic length changes in the isolated abdomen of the hermit crab. J. Neurophysiol. 78: 1491–1503, 1997. Reflex activation of the ventral superficial muscles (VSM) in the abdomen of the hermit crab, Pagurus pollicarus, was studied using sinusoidal and stochastic longitudinal vibration of the muscle while recording the length and force of the muscle and the spike times of three exciter motoneurons. In the absence of vibration, the interspike interval histograms of the two larger motoneurons were bimodal; cutting sensory nerves containing most of the mechanoreceptor input removed the short interval peak in the histogram, indicating that the receptors are important in maintaining tonic firing. Vibration of the muscle evoked a reflex increase in motoneuron frequency that habituated after an initial peak but remained above control levels for the duration of stimulation. Motoneuron frequency increased with root mean square (rms) stimulus amplitude. Average stiffness during stimulation was about two times the stiffness of passive muscle. The reflex did not alter muscle dynamics. Estimated transfer functions were calculated from the fast Fourier transform of length and force signals. Coherence was >0.9 for the frequency range of 3–35 Hz. Stiffness magnitude gradually increased over this range in both reflex activated and passive muscle; phase was between 10 and 20°. Reflex stiffness decreased with increasing stimulus amplitudes, but at larger amplitudes, this decrease was much less pronounced; in this range stiffness was regulated by the reflex. The sinusoidal frequency at which reflex bursts were elicited was ∼6 Hz, consistent with previous measurements using ramp stretch. During reflex excitation, there was an increase in amplitude of the short interval peak in the interspike interval histogram; this was reduced when the majority of afferent pathways was removed. 
A phase histogram of motoneuron firing during sinusoidal vibration had a peak at ∼110 ms, also suggesting that an important component of the reflex is via direct projections from the mechanoreceptors. These results are consistent with the hypothesis that a robust feedforward regulation of abdominal stiffness during continuous disturbances is achieved by mechanoreceptors signalling the absolute value of changing forces; habituation of the reflex, its high-threshold for low frequency disturbances and the activation kinetics of the muscle further modify reflex dynamics.

8

Wendling, Laurent, Salvatore Tabbone, and Pascal Matsakis. "Fast and robust recognition of orbit and sinus drawings using histograms of forces." Pattern Recognition Letters 23, no. 14 (December 2002): 1687–93. http://dx.doi.org/10.1016/s0167-8655(02)00131-9.


9

Lelièvre, Tony, Lise Maurin, and Pierre Monmarché. "The adaptive biasing force algorithm with non-conservative forces and related topics." ESAIM: Mathematical Modelling and Numerical Analysis 56, no. 2 (February 28, 2022): 529–64. http://dx.doi.org/10.1051/m2an/2022010.


Abstract:

We propose a study of the Adaptive Biasing Force method's robustness under generic (possibly non-conservative) forces. We first ensure the flat histogram property is satisfied in all cases. We then introduce a fixed point problem yielding the existence of a stationary state for both the Adaptive Biasing Force and Projected Adaptive Biasing Force algorithms, relying on generic bounds on the invariant probability measures of homogeneous diffusions. Using classical entropy techniques, we prove the exponential convergence of both the biasing force and the law as time goes to infinity, for both the Adaptive Biasing Force and the Projected Adaptive Biasing Force methods.

10

Frydrýšek, Karel, Daniel Čepica, and Tomáš Halo. "Stochastic Loading of a Sitting Human." Strojnícky časopis - Journal of Mechanical Engineering 69, no. 2 (June 1, 2019): 97–110. http://dx.doi.org/10.2478/scjme-2019-0020.


Abstract:

The aim of this paper is the biomechanical evaluation of the interaction between the load forces to which a sitting man and the seat are exposed. All loads, which take into account actual anthropometry histograms of the human population (i.e. segmentation of human weight, height, centroids, gravity and shape of seat), are determined using the direct Monte Carlo method. All inputs are based on the theory of probability (i.e. random/probabilistic inputs and outputs with respect to their variabilities). A simple plane model (i.e. probabilistic normal forces and bending moments) provides a sufficient stochastic/probabilistic evaluation relevant to biomechanics, ergonomics, medical engineering (implants, rehabilitation, traumatology, orthopaedics, surgery, etc.) or industrial design.
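The direct Monte Carlo approach described above can be sketched as follows; the distributions here (body mass, fraction of weight carried by the seat) are illustrative assumptions of mine, not the paper's anthropometric data:

```python
import random

# Illustrative sketch (assumed distributions, not the paper's data):
# direct Monte Carlo sampling of body mass and seat reaction force.
rng = random.Random(7)
g = 9.81  # gravitational acceleration [m/s^2]

def simulate(n=10000):
    forces = []
    for _ in range(n):
        mass = rng.gauss(80.0, 12.0)        # body mass [kg], assumed histogram
        seat_share = rng.uniform(0.6, 0.8)  # assumed fraction of weight on the seat
        forces.append(mass * g * seat_share)  # seat reaction force [N]
    return forces

forces = simulate()
mean_force = sum(forces) / len(forces)
```

A histogram of `forces` then gives the probabilistic load on the seat; in the paper the same sampling idea feeds probabilistic normal forces and bending moments of the plane model.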

11

Vogt, Barbara, Kathinka Deuß, Victoria Hennig, Zhanqi Zhao, Ingmar Lautenschläger, Norbert Weiler, and Inéz Frerichs. "Regional lung function in nonsmokers and asymptomatic current and former smokers." ERJ Open Research 5, no. 3 (July 2019): 00240–2018. http://dx.doi.org/10.1183/23120541.00240-2018.


Abstract:

Electrical impedance tomography (EIT) is able to detect rapid lung volume changes during breathing. The aim of our observational study was to characterise the heterogeneity of regional ventilation distribution in lung-healthy adults by EIT and to detect the possible impact of tobacco consumption. A total of 219 nonsmokers, asymptomatic ex-smokers and current smokers were examined during forced full expiration using EIT. Forced expiratory volume in 1 s (FEV1), forced vital capacity (FVC) and FEV1/FVC were determined in 836 EIT image pixels for the analysis of spatial and temporal ventilation distribution. Coefficients of variation (CVs) of these pixel values were calculated. Histograms and medians of FEV1/FVCEIT and the times required to exhale 50%, 75% and 90% of FVCEIT (t50, t75 and t90) were generated. CV of FEV1/FVCEIT distinguished among all groups (mean±sd: nonsmokers 0.43±0.05, ex-smokers 0.52±0.09, smokers 0.62±0.16). Histograms of FEV1/FVCEIT differentiated between nonsmokers and the other groups (p<0.0001). Medians of t50, t75 and t90 showed the lowest values in nonsmokers. Median t90 separated all groups (median (interquartile range): nonsmokers 0.82 (0.67–1.15), ex-smokers 1.41 (1.03–2.21), smokers 1.91 (1.33–3.53)). EIT detects regional ventilation heterogeneity during forced expiration in healthy nonsmokers and its increase in asymptomatic former and current smokers. Therefore, EIT-derived reference values should only be collected from nonsmoking lung-healthy adults.

12

Hartley, P. G., J. R. Galvin, G. W. Hunninghake, J. A. Merchant, S. J. Yagla, S. B. Speakman, and D. A. Schwartz. "High-resolution CT-derived measures of lung density are valid indexes of interstitial lung disease." Journal of Applied Physiology 76, no. 1 (January 1, 1994): 271–77. http://dx.doi.org/10.1152/jappl.1994.76.1.271.


Abstract:

To assess the validity of computer-assisted methods in analyzing the lung parenchyma imaged with high-resolution computed tomography (HRCT), we compared computer-derived estimates of lung density to other, more traditional, measures of parenchymal injury in 24 subjects with idiopathic pulmonary fibrosis (IPF) and 60 subjects with extensive occupational exposure to asbestos. Gray scale density histograms were constructed from the HRCT images. The gray scale histogram of both study groups was of a skewed unimodal distribution. However, compared with the asbestos-exposed subjects, the patients with IPF had a gray scale distribution that was significantly shifted to the right (greater density) and flatter. In a multivariate analysis, after controlling for age and cigarette smoking, we found that the mean and median gray scale densities were independently associated with the presence of moderate-to-severe dyspnea, a higher International Labour Office chest X-ray category, a lower forced vital capacity, and a higher concentration of macrophages and eosinophils in the bronchoalveolar lavage fluid. These factors accounted for > 70% of the variance of the mean and median gray scale densities. Interestingly, no differences in gray scale density measures were noted between patients with IPF and patients with asbestosis when these other factors were taken into account. Our results suggest that computer-derived density analysis of the lung parenchyma on the HRCT scan is a valid, clinically meaningful, and objective measure of interstitial lung disease.

13

Maggioni, V., E. N. Anagnostou, and R. H. Reichle. "The impact of model and rainfall forcing errors on characterizing soil moisture uncertainty in land surface modeling." Hydrology and Earth System Sciences 16, no. 10 (October 4, 2012): 3499–515. http://dx.doi.org/10.5194/hess-16-3499-2012.


Abstract:

The contribution of rainfall forcing errors relative to model (structural and parameter) uncertainty in the prediction of soil moisture is investigated by integrating the NASA Catchment Land Surface Model (CLSM), forced with hydro-meteorological data, in the Oklahoma region. Rainfall-forcing uncertainty is introduced using a stochastic error model that generates ensemble rainfall fields from satellite rainfall products. The ensemble satellite rain fields are propagated through CLSM to produce soil moisture ensembles. Errors in CLSM are modeled with two different approaches: either by perturbing model parameters (representing model parameter uncertainty) or by adding randomly generated noise (representing model structure and parameter uncertainty) to the model prognostic variables. Our findings highlight that the method currently used in the NASA GEOS-5 Land Data Assimilation System to perturb CLSM variables poorly describes the uncertainty in the predicted soil moisture, even when combined with rainfall model perturbations. On the other hand, by adding model parameter perturbations to rainfall forcing perturbations, a better characterization of uncertainty in soil moisture simulations is observed. Specifically, an analysis of the rank histograms shows that the most consistent ensemble of soil moisture is obtained by combining rainfall and model parameter perturbations. When rainfall forcing and model prognostic perturbations are added, the rank histogram shows a U-shape at the domain average scale, which corresponds to a lack of variability in the forecast ensemble. The more accurate estimation of the soil moisture prediction uncertainty obtained by combining rainfall and parameter perturbations is encouraging for the application of this approach in ensemble data assimilation systems.
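The rank histogram diagnostic used in this study can be sketched as follows (a toy example with synthetic Gaussian data, not the paper's CLSM ensembles); an under-dispersive ensemble piles counts into the extreme ranks, producing the U-shape the authors describe:

```python
import random

# Illustrative sketch: the rank histogram counts, for each observation,
# how many ensemble members fall below it. A flat histogram indicates a
# consistent ensemble; a U-shape indicates too little ensemble spread
# (observations often fall outside the ensemble envelope).
def rank_histogram(observations, ensembles):
    """ensembles[i] is the list of ensemble member values for observation i."""
    n_members = len(ensembles[0])
    counts = [0] * (n_members + 1)
    for obs, members in zip(observations, ensembles):
        rank = sum(1 for m in members if m < obs)
        counts[rank] += 1
    return counts

rng = random.Random(42)
# Under-dispersive ensemble: member spread (0.3) smaller than truth spread (1.0).
obs = [rng.gauss(0.0, 1.0) for _ in range(2000)]
ens = [[rng.gauss(0.0, 0.3) for _ in range(9)] for _ in obs]
counts = rank_histogram(obs, ens)  # extreme ranks dominate: the U-shape
```

With a well-calibrated ensemble (member spread equal to the truth spread) the same function returns a roughly flat histogram.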

14

Son, Jung Eun, Byoung Chul Ko, and Jae Yeal Nam. "Medical Image Classification and Retrieval Using BoF Feature Histogram with Random Forest Classifier." KIPS Transactions on Software and Data Engineering 2, no. 4 (April 30, 2013): 273–80. http://dx.doi.org/10.3745/ktsde.2013.2.4.273.


15

Wang, Wei, Yuling Song, Jun Chen, and Shuaibing Shi. "Dynamic reliability prediction of vehicular suspension structure with damping random uncertainties." Journal of Vibration and Control 25, no. 3 (July 26, 2018): 549–58. http://dx.doi.org/10.1177/1077546318788710.


Abstract:

In this research, a two-degrees-of-freedom nonlinear stochastic vibration model of a vehicular suspension structure with random bilinear damping forces and cubic nonlinear restoring forces is established. Based on extreme value theory (EVT) and the Monte Carlo integration method (MCIM), a novel method of predicting the failure probabilities of the maximum amplitude responses of the suspension structure is proposed. The extreme value distribution functions of the amplitude responses, expressed analytically in terms of the underlying amplitude distribution functions, are obtained using EVT. In order to obtain analytical expressions for the underlying distribution functions, histograms are used to estimate their distribution forms. Under the log-normal distribution assumption, the means and standard deviations of the underlying distributions are calculated using MCIM; then, by setting amplitude failure levels, the extreme value distribution functions and failure probabilities of the amplitude responses are obtained. Numerical analyses based on the Monte Carlo simulation method strongly support the use of the proposed prediction method.
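As a hedged sketch of the EVT step described above (toy numbers of mine, not the paper's suspension model): for i.i.d. amplitudes with underlying CDF F, the extreme value distribution of the maximum over n cycles is F(x)^n, so a failure probability at level x_f follows directly once the log-normal parameters are estimated by Monte Carlo:

```python
import math
import random

# Illustrative sketch (my assumptions, not the paper's model): under the
# log-normal assumption, estimate mu and sigma of the log-amplitudes by
# Monte Carlo, then use F_max(x) = F(x)**n from EVT for i.i.d. amplitudes.
def lognormal_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((math.log(x) - mu) / (sigma * math.sqrt(2.0))))

def failure_probability(x_f, mu, sigma, n):
    """P(maximum amplitude over n cycles exceeds the failure level x_f)."""
    return 1.0 - lognormal_cdf(x_f, mu, sigma) ** n

# Monte Carlo estimate of mu, sigma from simulated log-amplitudes.
rng = random.Random(3)
logs = [rng.gauss(1.0, 0.25) for _ in range(5000)]
mu = sum(logs) / len(logs)
sigma = math.sqrt(sum((v - mu) ** 2 for v in logs) / (len(logs) - 1))

p_fail = failure_probability(x_f=5.0, mu=mu, sigma=sigma, n=1000)
```

Raising the failure level lowers the failure probability, and increasing the number of cycles raises it, as the F(x)^n form requires.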

16

Liu, Qiong, and Jin Wang. "Quantifying the flux as the driving force for nonequilibrium dynamics and thermodynamics in non-Michaelis–Menten enzyme kinetics." Proceedings of the National Academy of Sciences 117, no. 2 (December 26, 2019): 923–30. http://dx.doi.org/10.1073/pnas.1819572117.


Abstract:

The driving force for active physical and biological systems is determined by both the underlying landscape and the nonequilibrium curl flux. While the landscape can be experimentally quantified from the histograms of the collected real-time trajectories of the observables, quantifying the experimental flux remains challenging. In this work, we studied the single-molecule enzyme dynamics of horseradish peroxidase with dihydrorhodamine 123 and hydrogen peroxide (H2O2) as substrates. Surprisingly, significant deviations in the kinetics from the conventional Michaelis–Menten reaction rate were observed. Instead of a linear relationship between the inverse of the enzyme kinetic rate and the inverse of substrate concentration, a nonlinear relationship between the two emerged. We identified nonequilibrium flux as the origin of such non-Michaelis–Menten enzyme rate behavior. Furthermore, we quantified the nonequilibrium flux from experimentally obtained fluorescence correlation spectroscopy data and showed that this flux leads to the deviations from Michaelis–Menten kinetics. We also identified and quantified the nonequilibrium thermodynamic driving forces as the chemical potential and entropy production for such non-Michaelis–Menten kinetics. Moreover, through isothermal titration calorimetry measurements, we identified and quantified the origin of both the nonequilibrium dynamic and thermodynamic driving forces as the heat absorbed (energy input) into the enzyme reaction system. Furthermore, we showed that the nonequilibrium driving forces lead to time irreversibility, seen in the difference between the forward and backward directions in time, and that high-order correlations are associated with the deviations from Michaelis–Menten kinetics. This study provides a general framework for experimentally quantifying the dynamic and thermodynamic driving forces of nonequilibrium systems.

17

Dow, Ximeng Y., Christopher M. Dettmar, Emma L. DeWalt, Justin A. Newman, Alexander R. Dow, Shatabdi Roy-Chowdhury, Jesse D. Coe, Christopher Kupitz, Petra Fromme, and Garth J. Simpson. "Second harmonic generation correlation spectroscopy for characterizing translationally diffusing protein nanocrystals." Acta Crystallographica Section D Structural Biology 72, no. 7 (June 23, 2016): 849–59. http://dx.doi.org/10.1107/s205979831600841x.


Abstract:

Second harmonic generation correlation spectroscopy (SHG-CS) is demonstrated as a new approach to protein nanocrystal characterization. A novel line-scanning approach was performed to enable autocorrelation analysis without sample damage from the intense incident beam. An analytical model for autocorrelation was developed, which includes a correction for the optical scattering forces arising when focusing intense, infrared beams. SHG-CS was applied to the analysis of BaTiO3 nanoparticles ranging from 200 to ∼500 nm and of photosystem I nanocrystals. A size distribution was recovered for each sample and compared with the size histogram measured by scanning electron microscopy (SEM). Good agreement was observed between the two independent measurements. The intrinsic selectivity of the second-order nonlinear optical process provides SHG-CS with the ability to distinguish well ordered nanocrystals from conglomerates and amorphous aggregates. Combining the recovered distribution of particle diameters with the histogram of measured SHG intensities provides the inherent hyperpolarizability per unit volume of the SHG-active nanoparticles. Simulations suggest that the SHG activity per unit volume is likely to exhibit relatively low sensitivity to the subtle distortions within the lattice that contribute to resolution loss in X-ray diffraction, but high sensitivity to the presence of multi-domain crystals.

18

Dziopa, Zbigniew, and Krzysztof Zdeb. "Dispersion Analysis of Rounds Fired from a Glauberyt Machine Pistol." Problems of Mechatronics Armament Aviation Safety Engineering 10, no. 1 (March 31, 2019): 121–34. http://dx.doi.org/10.5604/01.3001.0013.0801.


Abstract:

Within an enclosed shooting range of the EMJOT company, the process of firing one hundred single bullets from a Glauberyt machine pistol was recorded. The empirical test used 9x19 mm FMJ Luger (Parabellum) ammunition manufactured in the Czech Republic in 2017. As the weapon is dedicated to special forces, the shots were fired by an anti-terrorist operative at a target located 25 m away. In order to determine bullet dispersion, the results of the experiment were subjected to statistical processing. Mean displacement and mean square displacement relative to the mean hit point, histograms, normal distribution, as well as statistical tests and hypotheses were used for estimation. The shots were recorded with a high-speed digital camera (Phantom v9.1), and the recorded videos were used to determine the initial kinematic parameters of the bullet trajectory.
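The statistical processing described (mean point of impact, mean and mean square displacement) can be sketched as follows, with randomly generated hit coordinates standing in for the recorded data:

```python
import math
import random

# Illustrative sketch with made-up hit coordinates (not the recorded data):
# compute the mean point of impact, then the mean and root-mean-square
# displacement of each hit relative to that mean point.
rng = random.Random(1)
hits = [(rng.gauss(0.0, 3.0), rng.gauss(-1.0, 4.0)) for _ in range(100)]  # cm

mx = sum(x for x, _ in hits) / len(hits)  # mean point of impact, x
my = sum(y for _, y in hits) / len(hits)  # mean point of impact, y

displacements = [math.hypot(x - mx, y - my) for x, y in hits]
mean_disp = sum(displacements) / len(displacements)
rms_disp = math.sqrt(sum(d * d for d in displacements) / len(displacements))
```

A histogram of `displacements` and a normality test on the x and y coordinates would complete the dispersion analysis outlined in the abstract.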

19

Krasnov, O. G. "Technique to Determine Integral Distribution of Forces Acting on the Railway Track." World of Transport and Transportation 17, no. 4 (January 15, 2020): 6–21. http://dx.doi.org/10.30932/1992-3252-2019-17-4-6-21.


Abstract:

A comprehensive assessment of the effectiveness of heavy-haul traffic is a prerequisite for its implementation in the world railway sector. The assessment should include the study of the effect of increased axial loads on degradation of the elements of the track’s superstructure and on the deformation parameters of the ballast layer and the track’s substructure, as well as of the measures intended to reinforce the railway infrastructure accordingly. It is necessary to consider accurately the force factors acting on the track and generated by different types of rolling stock with various axial loads. The distribution of force factors for a specific section of a railway track is a multifactorial process whose quantitative parameters depend on the structure of the train traffic flow, the type of rolling stock and its share in the total daily train traffic passing through the section, the speed limits set for the section, the track profile (straight line, curve), and the technical condition of the rolling stock and the elements of the track superstructure. The objective of the work was to develop a technique for determining the integral law of distribution of the vertical and lateral forces affecting the track and caused by the wheels of different types of rolling stock, depending on the share of each type of rolling stock in the daily train traffic flow. Methods of mathematical statistics were used. In experimental studies of the effects of various types of rolling stock, the statistical distributions of vertical and lateral forces were determined. The histograms of vertical and lateral forces were approximated by theoretical laws, and the Kolmogorov–Smirnov fit criterion was used to confirm the goodness of fit of the chosen approximation functions.
A technique has been developed for mixed freight and passenger railway traffic that makes it possible to consider the contribution of each type of rolling stock to the force impact on the track when calculating the total impact. The technique is based on real, experimentally established distributions of vertical and lateral forces and on the traffic speed at the considered section of the track with regard to seasonality (winter, summer), and it can be used for sections where heavy-haul traffic is being implemented.

20

Jun, Zhang, Chang Qingbing, and Ren Zongjin. "Research on a non-linear calibration method for dynamometer." Sensor Review 40, no. 2 (March 9, 2020): 167–73. http://dx.doi.org/10.1108/sr-07-2019-0181.


Abstract:

Purpose: The purpose of this paper is to solve the problem that the relationship between loading forces applied at different positions on a plane and the output values of a load-sharing dynamometer is non-linear.
Design/methodology/approach: First, the analytical model of the ISPM (isodynamic surface proportional mapping) method, which is used to calibrate the dynamometer, was established. Then, a series of axial force calibration tests were performed on a load-sharing dynamometer at different loading positions. Finally, according to the output values, the calibration forces at different loading positions were calculated by the ISPM method, and the corresponding distribution histogram of the calibration force error was generated.
Findings: The largest error between the calculated force and the standard force is 2.92 per cent, and the probability of a calculated force error within 1 per cent is 91.03 per cent, which verifies that the ISPM method is reliable for the non-linear calibration of dynamometers.
Originality/value: The proposed ISPM method can achieve non-linear calibration between the measured force and the output signal of a load-sharing dynamometer at different positions. In addition, the ISPM method can also solve some complex non-linear problems, such as the prediction of plane cutting force under the influence of multiple parameters, force measurement on a multi-degree-of-freedom platform and so on.

21

Zanin, Massimiliano, Pedro Cano, Oscar Celma, and Javier M. Buldú. "Preferential Attachment, Aging and Weights in Recommendation Systems." International Journal of Bifurcation and Chaos 19, no. 02 (February 2009): 755–63. http://dx.doi.org/10.1142/s0218127409023123.


Abstract:

In the present work, algorithms based on complex network theory are applied to recommendation systems in order to improve the quality of their predictions. We show how some networks grow under the influence of trendy forces, and how this can be used to enhance the results of a recommendation system, i.e. increase its percentage of correct predictions. After defining a base algorithm, we create recommendation networks based on a histogram of user ratings, therefore using an underlying principle of preferential attachment. We show the influence of data aging on the prediction of user habits and how the exact moment of the prediction influences the recommendation. Finally, we design weighted networks that take into account the age of the information used to generate the links. In this way, we obtain a better approximation to evaluate the users' tastes.

22

Zeng, Lv Xian, Wei Dong Hu, and Si Xi Xiao. "Application of Rocks Removal Technologies before Shield-Driving in the Line 9 Guangzhou Metro." Advanced Materials Research 639-640 (January 2013): 251–56. http://dx.doi.org/10.4028/www.scientific.net/amr.639-640.251.


Abstract:

Detailed geological profiles and a complementary prospecting histogram revealed a series of hard rocks along the Line 9 tunnel of the Guangzhou metro, greatly increasing the difficulty of shield construction. The geological and hydrological conditions of the project were very complex, and vertical shafts were used to remove the rocks in order to reduce the cost of the project. Reverse construction technology was applied to the construction of the vertical shafts. The calculation of internal forces and the relevant design of the vertical shaft wall are introduced herein. The vertical shafts and surrounding buildings were monitored, and the monitored items are described in detail. At the same time, emergency measures were applied to ensure the safety of the project. The rock removal technologies applied before shield-driving in Line 9 of the Guangzhou metro are introduced herein, and the results can serve as a reference for other similar projects.

23

Vogt, Barbara, Zhanqi Zhao, Peter Zabel, Norbert Weiler, and Inéz Frerichs. "Regional lung response to bronchodilator reversibility testing determined by electrical impedance tomography in chronic obstructive pulmonary disease." American Journal of Physiology-Lung Cellular and Molecular Physiology 311, no. 1 (July 1, 2016): L8–L19. http://dx.doi.org/10.1152/ajplung.00463.2015.


Abstract:

Patients with obstructive lung diseases commonly undergo bronchodilator reversibility testing during examination of their pulmonary function by spirometry. A positive response is defined by an increase in forced expiratory volume in 1 s (FEV1). FEV1 is a rather nonspecific criterion not allowing the regional effects of bronchodilator to be assessed. We employed the imaging technique of electrical impedance tomography (EIT) to visualize the spatial and temporal ventilation distribution in 35 patients with chronic obstructive pulmonary disease at baseline and 5, 10, and 20 min after bronchodilator inhalation. EIT scanning was performed during tidal breathing and forced full expiration maneuver in parallel with spirometry. Ventilation distribution was determined by EIT by calculating the image pixel values of FEV1, forced vital capacity (FVC), tidal volume, peak flow, and mean forced expiratory flow between 25 and 75% of FVC. The global inhom*ogeneity indexes of each measure and histograms of pixel FEV1/FVC values were then determined to assess the bronchodilator effect on spatial ventilation distribution. Temporal ventilation distribution was analyzed from pixel values of times needed to exhale 75 and 90% of pixel FVC. Based on spirometric FEV1, significant bronchodilator response was found in 17 patients. These patients exhibited higher postbronchodilator values of all regional EIT-derived lung function measures in contrast to nonresponders. Ventilation distribution was inhom*ogeneous in both groups. Significant improvements were noted for spatial distribution of pixel FEV1 and tidal volume and temporal distribution in responders. By providing regional data, EIT might increase the diagnostic and prognostic information derived from reversibility testing.

24

Belyaev, V. I., D. V. Gorskiy, D. A. Stupin, and A. N. Konyshkov. "Evaluation of prevailing draft loads in the couplings of the electric trains." RUSSIAN RAILWAY SCIENCE JOURNAL 81, no. 4 (December 29, 2022): 297–305. http://dx.doi.org/10.21780/2223-9731-2022-81-4-297-305.


Abstract:

Introduction. Until now, the requirements for manual and automatic couplers for locomotive-hauled passenger carriages and for electric trains have been completely identical, despite the significantly lower longitudinal forces that arise during the movement of multi-unit rolling stock with distributed traction. The introduction of specialised coupling devices for electric trains led to the development of strength requirements that enable the creation of new models with smaller dimensions and weight. Operating experience has shown that coupling parts may fail, leading to the uncoupling of trains, and fatigue damage may cause accidents. However, there had been no in-service studies of the draft loads on electric train couplers from which regulatory requirements for specialised coupling devices could be developed.

Materials and methods. The authors measured the forces acting in the couplings of electric trains of various categories and types during normal operation, as well as the frequency of occurrence of forces of various levels. The article includes histograms, based on the processed data, showing the distribution of longitudinal forces acting on the inter-car couplers and the parameter characterising the accumulation of fatigue damage per 100,000 km.

Results. From the processed experimental data, the accumulation of fatigue damage in the couplers was calculated and requirements for their expected service life were established. Test procedures were also developed to evaluate strength and fatigue resistance under high-cycle and low-cycle loading conditions. The article provides grounds for introducing a correction factor into the procedure for processing test results, which makes it possible to perform life tests on equipment with different values of the load-cycle asymmetry factor.

Discussion and conclusion. The authors have evaluated the draft loads in coupling devices during the operation of electric trains, on the basis of which technical requirements and test methods were developed and approved by JSC Russian Railways. Development and acceptance of all new models of coupling devices for electric trains will be carried out in accordance with these regulatory documents.
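The histogram-based fatigue parameter above can be illustrated with linear (Palmgren-Miner) damage accumulation over force bins; the S-N curve constants `C` and `m` below are purely illustrative, not values from the study:

```python
import numpy as np

def miner_damage(force_levels_kN, cycle_counts, C=1e12, m=3.0):
    """Linear damage accumulation D = sum(n_i / N_i) over the bins of
    a force histogram, where N_i = C / S_i**m is the allowable cycle
    count from an assumed power-law S-N curve (C and m illustrative).
    D reaching 1.0 corresponds to the nominal fatigue life."""
    S = np.asarray(force_levels_kN, dtype=float)  # bin force levels
    n = np.asarray(cycle_counts, dtype=float)     # observed cycles/bin
    N = C / S**m                                  # allowable cycles/bin
    return float((n / N).sum())
```

Scaling the counts to cycles per 100,000 km turns `D` into a damage-accumulation rate from which an expected service life can be projected.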

25

Litvinov, Rustem I., Henry Shuman, Joel S. Bennett, and John W. Weisel. "Multi-Step Fibrinogen-αIIbβ3 Binding/Unbinding Revealed at the Single Molecule Level Using Laser Tweezers." Blood 104, no. 11 (November 16, 2004): 623. http://dx.doi.org/10.1182/blood.v104.11.623.623.


Abstract:

The regulated ability of the platelet integrin αIIbβ3 to bind fibrinogen plays a crucial role in platelet aggregation and primary hemostasis. We have developed a model system based on laser tweezers that enables us to measure directly the mechanical forces needed to separate single receptor-ligand complexes. In this system, a ligand-coated bead can be trapped by the laser and repeatedly brought into contact with a receptor-coated pedestal so that the forces required to separate the two can be measured and displayed as “rupture force” histograms. We have applied this system to the interaction of fibrinogen and αIIbβ3 by measuring the rupture force required to separate a laser-trapped bead coated with fibrinogen from an immobilized pedestal coated with purified αIIbβ3. These measurements revealed that bimolecular αIIbβ3-fibrinogen interactions are characterized by rupture forces ranging from 20 pN to 150 pN. We found that the yield strength of αIIbβ3-fibrinogen interactions was independent of the rate at which force was applied over a range of 160 to 16,000 pN/s, but the frequency of fibrinogen binding to αIIbβ3 during repeated contacts correlated strongly with the duration of contact between the αIIbβ3- and fibrinogen-coated surfaces. The observed rupture forces could be segregated into small (20–40 pN), intermediate (40–60 pN), and high (>60 pN) force regimes, each of which displayed unique kinetic behavior. Thus, the contact time needed to reach half maximal fibrinogen binding at a constant loading rate of 1,600 pN/s was ~1 ms for the 20–40 pN, 20 ms for the 40–60 pN, and 70 ms for the >60 pN force regimes. Increasing the surface density of immobilized fibrinogen changed the probability of αIIbβ3 binding to fibrinogen >3.5-fold, but did not affect the yield strength profile, implying that these measurements represent interactions between individual molecules.
Each force regime also differed in its susceptibility to the inhibitory effect of αIIbβ3 antagonists and to the activating effect of Mn2+. Thus, the low molecular weight αIIbβ3 antagonists H12 peptide, tirofiban, and RGDS were most effective in inhibiting weak interactions, whereas the monoclonal antibodies A2A9 and abciximab were most effective in inhibiting the stronger ones. In the presence of Mn2+, the strongest component of the yield force distribution increased more than 2.5-fold while the moderate force component increased only 1.5-fold. In addition, rupture forces in the range of 60 to 150 pN disappeared gradually when αIIbβ3 preparations were stored at 4°C for 7 d, suggesting that the αIIbβ3 conformation corresponding to these rupture forces is labile. Taken together, these data suggest that fibrinogen binding to αIIbβ3 is a complex, time-dependent multi-step process during which the strength of the bond between αIIbβ3 and fibrinogen appears to progressively increase.

26

Litvinov, Rustem I., Henry Shuman, David H. Farrell, Joel S. Bennett, and John W. Weisel. "Interaction of the Integrin αIIbβ3 with Monomeric Fibrin at the Single-Molecule Level." Blood 114, no. 22 (November 20, 2009): 4018. http://dx.doi.org/10.1182/blood.v114.22.4018.4018.


Abstract:

Abstract 4018, Poster Board III-954. Although platelets aggregated with fibrin constitute major components of in vivo blood clots and thrombi, heretofore most research has focused on fibrinogen's role in platelet aggregation. The integrin αIIbβ3 has been shown to primarily mediate platelet-fibrin interactions, but the mechanism of αIIbβ3 binding to fibrin is largely unknown and may be significantly different from binding to fibrinogen. To elucidate mechanisms of platelet interactions with fibrin, we have compared the overall reactivity, binding strength, and specificity towards αIIbβ3 of human fibrinogen vs. monomeric fibrin, using single-molecule optical trap-based rupture force spectroscopy of the surface-bound proteins. In this system, a ligand-coated bead is trapped by the laser and repeatedly brought into contact with a receptor-coated surface so that the forces required to separate the two can be measured and displayed as rupture force histograms. We have applied this technique to the interaction of fibrin(ogen) and αIIbβ3 by measuring the force required to separate a laser-trapped bead coated with either fibrinogen or monomeric fibrin from an immobilized pedestal coated with purified αIIbβ3. Experiments were performed with plasma-purified or recombinant homodimeric γA/γA and γ′/γ′ fibrin(ogen)s. The latter protein represents a splicing variant in which the γ chain has a substitution in the C-terminal four amino acids of the 400-411 dodecapeptide, the major αIIbβ3-binding site in fibrinogen. Surface-bound monomeric fibrin was obtained by treating fibrinogen-coated surfaces with thrombin. Control integrin-fibrinogen interactions manifested as a characteristic bimodal rupture force histogram with rupture forces ranging from 20 pN to 140 pN, similar to what we had observed previously (PNAS, 2002, 99, 7426).
The interactions were highly sensitive to inhibitory effects of abciximab and eptifibatide, specific αIIbβ3 antagonists, and to Mn2+-induced activation, indicating that they were mediated by the functional integrin. Monomeric fibrin γA/γA was at least as reactive towards αIIbβ3 as the parental fibrinogen γA/γA, sometimes exhibiting even higher binding probabilities and slightly stronger rupture forces. Fibrin-integrin interactions were less sensitive to the inhibitory effect of abciximab compared to fibrinogen, suggesting that some additional structures in αIIbβ3, not completely blocked by this Fab fragment, might be involved in binding to fibrin. Similar to fibrinogen, fibrin-integrin interactions were partially sensitive to eptifibatide, cRGD peptide and γC-dodecapeptide, indicating that their specificity is akin to fibrinogen but may not be identical. Fibrinogen γ′/γ′, lacking the established binding site for the integrin, was reactive with αIIbβ3, but the binding strength was somewhat smaller. The effect of replacing the γA with the γ′ chains in fibrinogen was qualitatively similar to competitive inhibition by the γC-dodecapeptide and resulted in complete cutoff of the larger force peak as well as in partial reduction of weaker interactions. That fibrinogen γ′/γ′ maintained its ability to bind αIIbβ3 suggests that the γ400-411 motif is not the only structure involved in the binding of immobilized fibrinogen to the integrin. Fibrin γ′/γ′ was as reactive with αIIbβ3 as fibrinogen γ′/γ′. Both of them interacted with αIIbβ3 in an RGD-sensitive manner. The data show that both surface-bound fibrinogen and monomeric fibrin are highly reactive with the integrin αIIbβ3. Fibrin is somewhat more reactive than fibrinogen in terms of binding probability and strength and is less sensitive to a specific inhibitor, abciximab, suggesting that αIIbβ3-fibrin interactions have distinct specificity. 
Susceptibility to competitive inhibitors, such as cRGD and γC-dodecapeptide, along with maintenance of integrin-binding activity of fibrinogen γ′/γ′ suggest that the αIIbβ3-binding sites in fibrin(ogen) are complex and include, but are not confined to, the γC-terminal 400-411 motif. Disclosures: No relevant conflicts of interest to declare.

27

Orozco-Levi, M., J. Gea, J. Sauleda, J. M. Corominas, J. Minguella, X. Aran, and J. M. Broquetas. "Structure of the latissimus dorsi muscle and respiratory function." Journal of Applied Physiology 78, no. 3 (March 1, 1995): 1132–39. http://dx.doi.org/10.1152/jappl.1995.78.3.1132.


Abstract:

The aim of this study was to evaluate whether respiratory function influences the structure of the latissimus dorsi muscle (LD). Twelve patients (58 +/- 10 yr) undergoing thoracotomy were studied. Lung and respiratory muscle function were evaluated before surgery. Patients showed a forced expired volume in 1 s (FEV1) of 67 +/- 16% of the reference value, an FEV1-forced vital capacity ratio of 69 +/- 9%, a maximal inspiratory pressure of 101 +/- 21% of the reference value, and a tension-time index of the diaphragm (TTdi) of 0.04 +/- 0.02. When patients were exposed to 8% CO2 breathing, TTdi increased to 0.06 +/- 0.03 (P < 0.05). The structural analysis of LD showed that 51 +/- 5% of the fibers were type I. The diameter was 56 +/- 9 microns for type I fibers and 61 +/- 9 microns for type II fibers, whereas the hypertrophy factor was 87 +/- 94 and 172 +/- 208 for type I and II fibers, respectively. Interestingly, the histogram distribution of the LD fibers was unimodal in two of the three individuals with normal lung function and bimodal (additional mode of hypertrophic fibers) in seven of the nine patients with chronic obstructive pulmonary disease. An inverse relationship was found between the %FEV1-forced vital capacity ratio and both the diameter of the fibers (type I: r = -0.773, P < 0.005; type II: r = -0.590, P < 0.05) and the hypertrophy factors (type I: r = -0.647, P < 0.05; type II: r = -0.575, P = 0.05).(ABSTRACT TRUNCATED AT 250 WORDS)

28

Uyeda, Chiaki, Keiji Hisayoshi, and Kentaro Terada. "Separation of Particles Composed of a Solid Solution [Mg2SiO4–Fe2SiO4] in the Sequence of Fe2+ Concentration Using a Pocketsize Magnetic-Circuit." Key Engineering Materials 843 (May 2020): 105–9. http://dx.doi.org/10.4028/www.scientific.net/kem.843.105.


Abstract:

Magnetic separation has generally required strong magnetic forces induced in ferromagnetic or strongly paramagnetic particles; in order to separate diamagnetic or weakly paramagnetic particles, it has been necessary to attach magnetic beads or magnetic ions to induce a strong magnetic force. A method is newly proposed to separate a mixture of weakly magnetic particles by their concentration of paramagnetic ferrous ions, which does not require such magnetic attachments. The efficiency of the new method is examined experimentally using a pocket-size magnetic circuit (4.5 cm × 2.0 cm × 1.0 cm) and a piece of cross-sectional paper (5.0 cm × 1.0 cm). The separation is based on the principle that the velocity of a translating particle, induced by a magnetic volume force in an area of monotonically decreasing field, is uniquely determined by the particle's magnetic susceptibility per unit mass; the velocity is independent of the particle's mass. By examining the spectra of the separated particles recovered on the cross-sectional paper, a histogram of Fe concentration is easily obtained for the particles without consuming them.
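The stated principle (exit velocity fixed by mass susceptibility alone) follows from equating the magnetic potential energy per unit mass at the release point, χB₀²/(2μ₀), with the kinetic energy v²/2. A minimal sketch of this simplified model, not the authors' exact apparatus:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability [T*m/A]

def exit_velocity(chi_mass, B0):
    """Velocity of a particle after leaving a monotonically decreasing
    field region, starting at rest at field B0 [T].  Energy balance
    chi_mass * B0**2 / (2*MU0) = v**2 / 2 gives
    v = B0 * sqrt(chi_mass / MU0), with no dependence on particle mass:
    particles sort purely by mass susceptibility (i.e. Fe2+ content)."""
    return B0 * math.sqrt(chi_mass / MU0)
```

Because `v` depends only on `chi_mass`, the landing positions on the recording paper map directly to a susceptibility (hence Fe-concentration) histogram.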

29

Zhang, Xiaolei, Yanzhong Ju, and Fuwang Wang. "Statistical Analysis of Wind-Induced Dynamic Response of Power Towers and Four-Circuit Transmission Tower-Line System." Shock and Vibration 2018 (2018): 1–18. http://dx.doi.org/10.1155/2018/5064930.


Abstract:

Previous studies investigated only a single wind field model loading the transmission tower or the tower-line system; the influence of two different wind field models was not considered. In addition, only one sample of the wind speed random process was used in past numerical simulations, so a statistical analysis over multiple dynamic responses should be carried out. In this paper, a statistical analysis of the wind-induced dynamic response of single towers and of the transmission tower-line system is performed with improved accuracy. A finite element model of the transmission tower-line system (the tower consists of both steel tubes and angle steel) is established in the ANSYS software. The analysis was performed with three statistical methods, and the effects of the length of the time history and of the number of samples were investigated. The frequency histograms of the samples follow the Gaussian distribution, and the characteristic statistical parameters of the samples are random. The displacements and the axial forces of the low tower are larger than those of the high tower. Two wind field models were applied to simulate the wind speed time history: in the field 1 model, the Davenport wind speed spectrum and the Shiotani coherence function were applied, while in the field 2 model the Kaimal wind speed spectrum and the Davenport coherence function were used. The results indicate that wind field 1 is calmer than wind field 2. The displacements and the axial forces of the tower-line system are less than those of single towers, which indicates damping of wind-induced vibrations by the transmission lines. An extended dynamic response statistical analysis should be carried out for the transmission tower-line system.
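The field 1 inputs mentioned above rest on the classical Davenport along-wind velocity spectrum, S_u(n) = 4K·V̄₁₀²·x² / (n·(1+x²)^(4/3)) with x = 1200·n/V̄₁₀. A sketch (the drag coefficient `K` is an illustrative value, not one from the paper):

```python
def davenport_spectrum(n, V10, K=0.005):
    """Davenport along-wind velocity spectrum S_u(n) [m^2/s].
    n: frequency [Hz]; V10: mean wind speed at 10 m height [m/s];
    K: surface drag coefficient (illustrative value).  The spectrum
    decays at high frequency, concentrating gust energy at low f."""
    x = 1200.0 * n / V10
    return 4.0 * K * V10**2 * x**2 / (n * (1.0 + x**2) ** (4.0 / 3.0))
```

Sampling this spectrum (together with a coherence function across tower height) is the standard route to generating the correlated wind speed time histories used as load input.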

30

Aligo, Eric A., William A. Gallus, and Moti Segal. "Summer Rainfall Forecast Spread in an Ensemble Initialized with Different Soil Moisture Analyses." Weather and Forecasting 22, no. 2 (April 1, 2007): 299–314. http://dx.doi.org/10.1175/waf995.1.


Abstract:

The performance of an ensemble forecasting system initialized using varied soil moisture alone has been evaluated for rainfall forecasts of six warm season convective cases. Ten different soil moisture analyses were used as initial conditions in the ensemble, which used the Weather Research and Forecasting (WRF) Advanced Research WRF (ARW) model at 4-km horizontal grid spacing with explicit rainfall. Soil moisture analyses from the suite of National Weather Service operational models—the Rapid Update Cycle, the North American Model (formerly known as the Eta Model), and the Global Forecast System—were used to design the 10-member ensemble. For added insight, two other runs with extremely low and high soil moistures were included in this study. Although the sensitivity of simulated 24-h rainfall to soil moisture was occasionally substantial in both weakly forced and strongly forced cases, a U-shaped rank histogram indicated insufficient spread in the 10-member ensemble. This result suggests that ensemble forecast systems using soil moisture perturbations alone might not add enough variability to rainfall forecasts. Perturbations to both atmospheric initial conditions and land surface initial conditions as well as perturbations to other aspects of model physics may increase forecast spread. Correspondence ratio values for the 0.01- and 0.5-in. rainfall thresholds imply some spread in the soil moisture ensemble, but mainly in the weakly forced cases. Relative operating characteristic curves for the 10-member ensemble and for various rainfall thresholds indicate modest skill for all thresholds with the most skill associated with the lowest rainfall threshold, a result typical of warm season events.
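The U-shaped rank histogram cited above can be computed by ranking each verifying observation within its sorted ensemble; a minimal sketch (ties between observation and members are ignored for simplicity):

```python
import numpy as np

def rank_histogram(observations, ensemble_forecasts):
    """For each case, count how many ensemble members fall below the
    verifying observation (rank 0..n_members), then histogram those
    ranks.  A flat histogram suggests a well-dispersed ensemble; a U
    shape (observations often below the lowest or above the highest
    member) indicates insufficient spread."""
    obs = np.asarray(observations, dtype=float)
    ens = np.asarray(ensemble_forecasts, dtype=float)  # (cases, members)
    ranks = (ens < obs[:, None]).sum(axis=1)
    return np.bincount(ranks, minlength=ens.shape[1] + 1)
```

With only ten soil-moisture members, counts piling up in the two outer bins is exactly the U shape the study reports.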

31

Sanabria Bustos, J. R., J. K. Rentería Aguas, C. A. Casas Díaz, and E. E. Roa Guerrero. "DISEÑO E IMPLEMENTACIÓN DE UNA PLATAFORMA ROBÓTICA MÓVIL PARA IDENTIFICACIÓN DE MINAS TERRESTRES ANTIPERSONA EN DIFERENTES TERRENOS DEL TERRITORIO COLOMBIANO." Revista Cientifica TECNIA 25, no. 1 (January 28, 2017): 25. http://dx.doi.org/10.21754/tecnia.v25i1.19.


Abstract:

Demining by the security forces is currently considered work with a high mortality rate. The purpose of this article was to develop and implement a remotely operated robotic platform for the identification of antipersonnel landmines on different terrains, built around an obstacle-avoidance system using the VFH (Vector Field Histogram) algorithm implemented in the LabVIEW Robotics software. The results showed that the caterpillar-type traction system allows better movement and more grip on the different terrains than the wheel-type traction system; in addition, 87.5% accuracy in mine identification was obtained over a set of 8 different materials, compared with the MICRONTA 4003 system currently in use. The implemented system contributes to the detection of dangerous objects on different terrains, becoming an alternative for reducing mortality rates in demining tasks. Keywords: Maneuverability, MICRONTA, Remotely operated platform, Traction system.
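The VFH (Vector Field Histogram) step mentioned above can be sketched in a few lines: build a polar obstacle-density histogram, discard sectors above a density threshold, and steer toward the remaining sector closest to the goal heading. This is a simplified illustration, not the platform's actual LabVIEW controller; the sector layout and threshold are arbitrary:

```python
import numpy as np

def vfh_steer(sector_angles_deg, obstacle_density, target_deg, threshold=0.5):
    """Choose a steering direction from a polar obstacle-density
    histogram: among sectors whose density is below the threshold,
    return the angle closest to the target heading; return None
    (stop) if every sector is blocked."""
    a = np.asarray(sector_angles_deg, dtype=float)
    d = np.asarray(obstacle_density, dtype=float)
    free = a[d < threshold]          # candidate (free) sectors
    if free.size == 0:
        return None
    return float(free[np.argmin(np.abs(free - target_deg))])
```

Run at each control step with a fresh histogram from the range sensors, this yields the reactive obstacle-avoidance behaviour the platform relies on.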

32

Zhang, Ce, Charlotte Christina Roossien, Gijsbertus Jacob Verkerke, Han Houdijk, Juha M. Hijmans, and Christian Greve. "Biomechanical Load of Neck and Lumbar Joints in Open-Surgery Training." Sensors 23, no. 15 (August 5, 2023): 6974. http://dx.doi.org/10.3390/s23156974.


Abstract:

The prevalence of musculoskeletal symptoms (MSS) like neck and back pain is high among open-surgery surgeons. Prolonged working in the same posture and unfavourable postures are biomechanical risk factors for developing MSS. Ergonomic devices such as exoskeletons are possible solutions that can reduce muscle and joint load. To design effective exoskeletons for surgeons, one needs to quantify which neck and trunk postures are seen and how much support during actual surgery is required. Hence, this study aimed to establish the biomechanical profile of neck and trunk postures and neck and lumbar joint loads during open surgery (training). Eight surgical trainees volunteered to participate in this research. Neck and trunk segment orientations were recorded using an inertial measurement unit (IMU) system during open surgery (training). Neck and lumbar joint kinematics, joint moments and compression forces were computed using OpenSim modelling software and a musculoskeletal model. Histograms were used to illustrate the joint angle and load distribution of the neck and lumbar joints over time. During open surgery, the neck flexion angle was 71.6% of the total duration in the range of 10~40 degrees, and lumbar flexion was 68.9% of the duration in the range of 10~30 degrees. The normalized neck and lumbar flexion moments were 53.8% and 35.5% of the time in the range of 0.04~0.06 Nm/kg and 0.4~0.6 Nm/kg, respectively. Furthermore, the neck and lumbar compression forces were 32.9% and 38.2% of the time in the range of 2.0~2.5 N/kg and 15~20 N/kg, respectively. In contrast to exoskeletons used for heavy lifting tasks, exoskeletons designed for surgeons exhibit lower support torque requirements while additional degrees of freedom (DOF) are needed to accommodate combinations of neck and trunk postures.
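The time-in-range percentages above amount to histogramming a uniformly sampled joint-angle series, where the fraction of samples in a bin equals the fraction of time; a minimal sketch with hypothetical angles:

```python
import numpy as np

def percent_time_in_range(angles_deg, low, high):
    """Percentage of samples whose value lies in [low, high); at a
    constant sampling rate this equals the percentage of time the
    joint spent in that posture bin."""
    a = np.asarray(angles_deg, dtype=float)
    return float(100.0 * np.mean((a >= low) & (a < high)))
```

Evaluating this for each bin (e.g. neck flexion 10-40 degrees, lumbar flexion 10-30 degrees) over the IMU-derived angle traces reproduces the histogram-style duty-cycle figures the study reports.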

33

Gould, G. A., A. T. Redpath, M. Ryan, P. M. Warren, J. J. Best, D. C. Flenley, and W. MacNee. "Lung CT density correlates with measurements of airflow limitation and the diffusing capacity." European Respiratory Journal 4, no. 2 (February 1, 1991): 141–46. http://dx.doi.org/10.1183/09031936.93.04020141.


Abstract:

We studied 80 subjects (63 M, 17 F; 23-82 yrs) and related lung computerized tomography (CT) density with age, height, spirometry, lung volumes, diffusing capacity and arterial blood gas tensions. These subjects demonstrated a wide range of physiological impairment (forced expiratory volume in one second (FEV1) 8-116% predicted; diffusing capacity (Kco) 15-139% predicted; arterial oxygen tension (Pao2) 38-91 mmHg). They ranged from normal subjects to patients with chronic respiratory failure. Lung density was derived from CT density histograms, measured both as the mean EMI number (on the EMI scale: 0 = water, -500 = air; the EMI number of normal lung tissue ranges from approximately -200 to -450) and as the lowest 5th percentile EMI number, the latter value being more likely to represent the density of lung parenchyma. Lung CT density correlated most strongly with airflow obstruction (EMI 5th percentile versus FEV1/forced vital capacity (FVC) % predicted, r = 0.73, p < 0.001) and diffusing capacity (EMI 5th percentile versus Kco, r = 0.77, p < 0.001). This suggests that reduction in lung density, which reflects loss of the surface area of the distal airspaces, is a major index of respiratory function in patients with smoking-related chronic obstructive pulmonary disease (COPD). These data provide no indication of other factors, such as small and large airways disease and loss of elastic recoil, which may contribute to airflow limitation, or of disruption of the pulmonary vascular bed, which may also affect CT lung density.
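The two histogram-derived density indices used in this study (mean and lowest 5th percentile) reduce to simple statistics of the pixel-value distribution; a minimal sketch (the percentile interpolation rule below is NumPy's default, which may differ from the original scanner software):

```python
import numpy as np

def density_indices(emi_values):
    """Return (mean, 5th percentile) of a lung density distribution.
    The lowest 5th percentile better reflects parenchymal density,
    being less influenced by high-density structures such as vessels
    and airway walls."""
    v = np.asarray(emi_values, dtype=float)
    return float(v.mean()), float(np.percentile(v, 5))
```

Either index can then be correlated against FEV1/FVC or Kco across subjects, as the study does.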

34

Sadeghi, Negin, Koen Robbelein, Pietro D’Antuono, Nymfa Noppe, Wout Weijtjens, and Christof Devriendt. "Fatigue damage calculation of offshore wind turbines’ long-term data considering the low-frequency fatigue dynamics." Journal of Physics: Conference Series 2265, no. 3 (May 1, 2022): 032063. http://dx.doi.org/10.1088/1742-6596/2265/3/032063.


Abstract:

The mutual and simultaneous action of external variable forces on (offshore) wind support structures causes fatigue. Fatigue analysis in this context relies on 10-minute-long strain signals. Cycle-counting allows capturing the fatigue cycles nested within these signals but inevitably leaves some open loops called half-cycles or residuals. Some other loops are not even cycle-counted because the algorithm cannot catch low-frequency (LF) cycles spanning more than 10 minutes, as caused, e.g., by wind speed variations. Notwithstanding, LF cycles are also the most damaging since they always contain the highest range in the variable amplitude signal. Therefore, counting multiple 10-minute signals and merging the resulting histograms into one inevitably has some non-conservative effects, for it leaves the LF cycles uncounted. In this work, we avoid signal concatenation to recover the LF effect. To do so, we use the residuals sequence from the 10-minute signals as it embeds the LF information. As method validation, we compare the impact of LF fatigue dynamics recovery on the linearly accumulated damage using a real-life dataset measured at a Belgian offshore wind turbine. By defining a factor that incorporates the LF effect, we observe that after 300 days of observation, the factor converges to a fixed value.
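The residual-based idea above can be illustrated with a compact three-point rainflow counter that returns both the closed cycles and the open residual; concatenating the residuals of successive 10-minute records and re-counting them recovers the LF cycles that per-record counting misses. This is a simplified sketch (it does not follow the full ASTM E1049 treatment of the starting point), not the authors' implementation:

```python
def rainflow(series):
    """Simplified three-point rainflow count.  Returns (cycles,
    residual): cycles as (range, mean) tuples for closed loops, and
    the leftover turning points (the 'residual') holding the
    still-open half-cycles."""
    # reduce the signal to its turning points
    tp = [series[0]]
    for x in series[1:]:
        if len(tp) >= 2 and (tp[-1] - tp[-2]) * (x - tp[-1]) > 0:
            tp[-1] = x          # still moving in the same direction
        elif x != tp[-1]:
            tp.append(x)
    cycles, stack = [], []
    for p in tp:
        stack.append(p)
        while len(stack) >= 3:
            a, b, c = stack[-3], stack[-2], stack[-1]
            if abs(b - a) <= abs(c - b):            # inner loop closed
                cycles.append((abs(b - a), (a + b) / 2))
                del stack[-3:-1]                    # drop the closed pair
            else:
                break
    return cycles, stack
```

Per-record damage uses only `cycles`; the paper's correction comes from additionally rainflow-counting the chained `residual` sequences of many records.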

35

Gugger, M., G. Gould, M. F. Sudlow, P. K. Wraith, and W. MacNee. "Extent of pulmonary emphysema in man and its relation to the loss of elastic recoil." Clinical Science 80, no. 4 (April 1, 1991): 353–58. http://dx.doi.org/10.1042/cs0800353.


Abstract:

1. We assessed lung density, determined by computerized tomography, as a measure of emphysema and related this to lung function and measurement of the elastic recoil of the lung in normal subjects and patients with chronic obstructive lung disease. 2. We found a significant correlation between measurements of elastic recoil pressure at 90% of total lung capacity and both the forced expiratory volume in 1 s (r = 0.80, P <0.001) and the transfer factor for carbon monoxide (r = 0.70, P <0.001). Measurements of elastic recoil of the lung also correlated with lung density as measured by computerized tomography scanning (P <0.001). 3. Multiple regression analysis demonstrated a correlation between the density of the lowest fifth percentile of the computerized tomography lung-density histogram, and both the natural logarithm of the shape parameter of the pressure-volume curve (P <0.01), and the transfer factor for carbon monoxide (P <0.01). However, the mean computerized tomography lung density correlated, in addition, with the elastic recoil pressure of the lungs at 90% of total lung capacity (P <0.001). 4. Since the elastic recoil pressure correlates with computerized tomography lung density, and hence with emphysema, and since elastic recoil pressure also correlates with the forced expiratory volume in 1 s, these results suggest that loss of elastic recoil is one determinant of airflow limitation in patients with chronic obstructive lung disease.

36

Falandys, Karol, Krzysztof Kurc, Andrzej Burghardt, and Dariusz Szybicki. "Automation of the Edge Deburring Process and Analysis of the Impact of Selected Parameters on Forces and Moments Induced during the Process." Applied Sciences 13, no. 17 (August 25, 2023): 9646. http://dx.doi.org/10.3390/app13179646.


Abstract:

The article concerns the possibility of automating and robotizing the process of deburring jet engine components. The paper presents the construction of a laboratory stand enabling the automation of selected production operations for typical low-pressure turbine blades. The work identifies the important parameters and outcomes of the technological process related to burr removal that affect its precision. An analysis of the impact of individual process parameters on the magnitude of the forces and moments occurring during deburring was carried out, and its results are presented together with the results of initial and detailed tests. Based on the results obtained, it was observed that doubling the rotational speed of the brush results in a linear increase in torque, while increasing the engagement of the workpiece in the disc brush leads to a non-linear increase in torque. It has also been shown that with tool wear, the torque generated by the rotating tool decreases. Based on a comparison of the manual and automated processes and a histogram analysis, results from the automated stand are centered more accurately within the required radius range. This means that the repeatability of the process is higher for the automated stand, which is one of the key aspects of large-scale aviation component manufacturing. Additionally, visual inspection confirmed that all burrs had been removed correctly: the deburring operation was successful for all tested workpieces. Based on the results obtained, it was shown that introducing an automated stand can improve working conditions (by eliminating progressive employee fatigue and the risk of injury) and removes the negative impact of the machining process on workers. Further areas have also been indicated in which the process parameters of edge deburring can be optimized to reduce unit costs.

37

Vlasov, Konstantin Sergeevich, and Aleksandr Alekseevich Poroshin. "Study of regional characteristics of the parameters of large fires." Technology of technosphere safety 96 (2022): 82–91. http://dx.doi.org/10.25257/tts.2022.2.96.82-91.


Abstract:

Introduction. The article describes a scientific and methodological approach to determining the parameters of large fires that accounts for the operational activity of fire and rescue garrisons in the constituent entities (regions) of the Russian Federation. Using Big Data technologies, the data arrays were analysed and a method was proposed for classifying fires as "ordinary" or "large" on the basis of parameters describing the fire-extinguishing actions. The indicators considered include the time interval during which fire departments are engaged at a fire, the number of mobile fire and rescue vehicles deployed, and the number of devices used to supply extinguishing agents. The method determines the parameters of large fires for each region, which in turn makes it possible to develop management decisions on building up the forces and means needed to extinguish them. Purpose and objectives. The purpose of the study is to develop a method for assessing the parameters of large fires that takes into account the operational activity of fire and rescue units; the objective is to determine these parameters for each region. Methods. The study uses methods of mathematical statistics and Big Data technologies; the input data are statistics on fires and on the operational activity of fire and rescue units for 2010 to 2021. Results and discussion. The parameters of large fires were determined for individual regions; they define the conditions for deploying the forces and means of fire and rescue units in a particular region. Conclusions.
The developed method determines the possible scale of a large fire in a given region and, accordingly, the composition of the forces and means of the fire and rescue garrison required to concentrate the fire-protection resources needed to eliminate a major fire successfully. Keywords: large fire, operational activity, busy time, mobile fire and rescue equipment, fire extinguishing devices, histogram, Python programming language, frame, hyperbola.
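The classification idea described in this abstract — splitting fires into "ordinary" and "large" from the histogram of an engagement indicator — can be sketched in a few lines. Everything below (the log-normal busy times, the 95th-percentile rule) is an invented illustration, not the paper's regional thresholds, which are derived from real fire statistics:

```python
import numpy as np

# Hypothetical busy times (minutes) of fire departments at incidents;
# the long right tail mimics the rare, resource-intensive large fires.
rng = np.random.default_rng(4)
busy = rng.lognormal(mean=3.5, sigma=0.8, size=5000)

# One simple splitting rule: call a fire "large" when the busy time
# exceeds a high percentile of the busy-time histogram.
threshold = np.percentile(busy, 95)
large = busy > threshold
print(f"threshold = {threshold:.1f} min, large fires: {large.sum()} of {busy.size}")
```

A per-region method would fit such a threshold separately on each region's data, since the histograms differ between garrisons.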

38

Trivedi, Anusua, Caleb Robinson, Marian Blazes, Anthony Ortiz, Jocelyn Desbiens, Sunil Gupta, Rahul Dodhia, et al. "Deep learning models for COVID-19 chest x-ray classification: Preventing shortcut learning using feature disentanglement." PLOS ONE 17, no.10 (October6, 2022): e0274098. http://dx.doi.org/10.1371/journal.pone.0274098.

Full text

APA, Harvard, Vancouver, ISO, and other styles

Abstract:

In response to the COVID-19 global pandemic, recent research has proposed creating deep learning based models that use chest radiographs (CXRs) in a variety of clinical tasks to help manage the crisis. However, existing datasets of CXRs from COVID-19+ patients are relatively small, and researchers often pool CXR data from multiple sources, for example, using different x-ray machines in various patient populations under different clinical scenarios. Deep learning models trained on such datasets have been shown to overfit to erroneous features instead of learning pulmonary characteristics in a phenomenon known as shortcut learning. We propose adding feature disentanglement to the training process. This technique forces the models to identify pulmonary features from the images and penalizes them for learning features that can discriminate between the original datasets that the images come from. We find that models trained in this way indeed have better generalization performance on unseen data; in the best case we found that it improved AUC by 0.13 on held-out data. We further find that this outperforms masking out non-lung parts of the CXRs and performing histogram equalization, both of which are recently proposed methods for removing biases in CXR datasets.

39

Alves, Patrícia, Joana Maria Moreira, João Mário Miranda, and Filipe José Mergulhão. "Analysing the Initial Bacterial Adhesion to Evaluate the Performance of Antifouling Surfaces." Antibiotics 9, no.7 (July17, 2020): 421. http://dx.doi.org/10.3390/antibiotics9070421.

Full text

APA, Harvard, Vancouver, ISO, and other styles

Abstract:

The aim of this work was to study the initial events of Escherichia coli adhesion to polydimethylsiloxane, which is critical for the development of antifouling surfaces. A parallel plate flow cell was used to perform the initial adhesion experiments under controlled hydrodynamic conditions (shear rates ranging between 8 and 100/s), mimicking biomedical scenarios. Initial adhesion studies capture the cell-surface interactions more accurately since, in later stages, incoming cells may interact not only with the surface but also with already adhered cells. Adhesion rates were calculated, and results showed that after some time (between 5 and 9 min) these rates decreased (by 55% on average) from the initial values for all tested conditions. The common explanation for this decrease is the occurrence of hydrodynamic blocking, where the area behind each adhered cell is screened from incoming cells. This was investigated using a pair correlation map from which two-dimensional histograms showing the density probability function were constructed. The results highlighted a lower density probability (below 4.0 × 10⁻⁴) of the presence of cells around a given cell under different shear rates, irrespective of the radial direction. A shadowing area behind the already adhered cells was not observed, indicating that hydrodynamic blocking was not occurring and therefore could not be the cause of the decrease in cell adhesion rates. Afterward, cell transport rates from the bulk solution to the surface were estimated using the Smoluchowski-Levich approximation, and values in the range of 80–170 cells/(cm²·s) were obtained. The drag forces that adhered cells have to withstand were also estimated, and values in the range of 3–50 × 10⁻¹⁴ N were determined. Although mass transport increases with the flow rate, drag forces also increase, and the relative importance of these factors may change under different conditions.
This work demonstrates that adjustment of operational parameters in initial adhesion experiments may be required to avoid hydrodynamic blocking, in order to obtain reliable data about cell-surface interactions that can be used in the development of more efficient antifouling surfaces.
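The pair-correlation map used in this study to test for hydrodynamic blocking can be approximated by a two-dimensional histogram of cell-to-cell displacements: a depleted "shadow" region downstream of the origin would indicate blocking. The positions, extent, and bin counts below are arbitrary placeholders, not the experimental geometry:

```python
import numpy as np

def pair_displacement_histogram(pts, extent=20.0, bins=40):
    """2-D histogram of displacements between all ordered cell pairs.

    Normalizing by the number of reference cells gives a density map;
    an empty 'shadow' downstream of adhered cells would show up as a
    depleted region on one side of the origin.
    """
    diffs = []
    for i, p in enumerate(pts):
        d = np.delete(pts, i, axis=0) - p   # displacements to all other cells
        diffs.append(d)
    diffs = np.vstack(diffs)
    H, xe, ye = np.histogram2d(diffs[:, 0], diffs[:, 1],
                               bins=bins, range=[[-extent, extent]] * 2)
    return H / len(pts)

rng = np.random.default_rng(3)
pts = rng.uniform(0, 100, size=(200, 2))   # hypothetical cell positions (µm)
H = pair_displacement_histogram(pts)
print(H.shape, H.sum())
```

For uniformly random positions the map is point-symmetric about the origin; an asymmetry along the flow direction is what the blocking hypothesis predicts.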

40

Senthilbabu,S., B.K.Vinayagam, and J.Arunraj. "Roughness Analysis of the Holes Drilled on Al / SiC Metal Matrix Composites Using Atomic Force Microscope." Applied Mechanics and Materials 766-767 (June 2015): 837–43. http://dx.doi.org/10.4028/www.scientific.net/amm.766-767.837.

Full text

APA, Harvard, Vancouver, ISO, and other styles

Abstract:

Metal matrix composites (MMCs) possess significantly improved properties, including a high strength-to-weight ratio, high modulus, and superior wear and corrosion resistance compared to unreinforced alloys. In spite of these advantages, aluminium-based metal matrix composites have not yet found wide employment in commercial applications, because the hard reinforcement particles in the matrix cause serious problems in machining and drilling, such as high drilling forces, tool wear, and poor surface finish. The drilling of Al/SiC metal matrix composites has received serious attention for many years. Different specimens were prepared from aluminium/silicon carbide (SiC) metal matrix composites by the stir casting method, varying the weight percentage of SiC, and the investigation was carried out by drilling holes in them. Surface finish is an important factor to be taken into account in the drilling process, so it is necessary to predict the average surface roughness of the materials. The roughness of the drilled holes was then studied using an atomic force microscope, comparing parameters such as cutting speed and feed rate. The average roughness was studied on 30 µm × 30 µm and 15 µm × 15 µm horizontal and vertical sections, and their histograms were also examined.

41

Y.Y.Shayakhmetov, Y.T.Abilmazhinov, D.K.Dukenbayev, S.S.Shakhova, and R.A.Sovetbayev. "ALGORITHM FOR PREDICTING THE ROUGHNESS OF THE INNER SURFACE DURING TURNING PROCESS." Science and Technology of Kazakhstan, no.4,2023 (December29, 2023): 100–109. http://dx.doi.org/10.48081/wuty7402.

Full text

APA, Harvard, Vancouver, ISO, and other styles

Abstract:

The article analyzes the main technological factors affecting the quality of parts, specifically the quality of the treated surface, i.e. the roughness of the surface of the part. Turning (boring) was considered as an example, and a number of measures aimed at improving surface quality at the design stage were proposed. A stochastic simulation model of surface roughness formation during turning was proposed. Based on this model, an algorithm and a computer program were developed to calculate the roughness at the design stage. The initial data for modeling are the main cutting-edge angle φ, the auxiliary cutting-edge angle φ1, the cutter nose radius r, and the feed S, as the factors that most strongly affect roughness during boring. An experiment was carried out in which not only average values but also histograms of the distribution of roughness parameters were obtained, and the adequacy of the model was proved. When calculating the roughness, additional factors were taken into account, for example the condition of the machine, which made it possible to correct the stochastic simulation model and introduce vibration components associated with the cutting forces and the elastic deflections of the cutter during boring. Keywords: turning, surface quality, surface roughness, simulation model, stochastic model.

42

Bushmakova,MariaА., and ElenaV.Kustova. "Modeling vibrational relaxation rate using machine learning methods." Vestnik of Saint Petersburg University. Mathematics. Mechanics. Astronomy 9, no.1 (2022): 113–25. http://dx.doi.org/10.21638/spbu01.2022.111.

Full text

APA, Harvard, Vancouver, ISO, and other styles

Abstract:

The aim of the present study is to develop an efficient algorithm for simulating nonequilibrium gas-dynamic problems using the detailed state-to-state approach for vibrational-chemical kinetics. Optimization of the vibrational relaxation rate computation using machine learning algorithms is discussed. Since traditional calculation methods require a large number of operations, time and memory, it is proposed to predict the relaxation rates instead of computing them explicitly. K-nearest-neighbour and histogram-based gradient boosting algorithms are applied. The algorithms were trained on datasets obtained using two classical models for the rate coefficients: the forced harmonic oscillator model and that of Schwartz-Slawsky-Herzfeld. The trained algorithms were used to solve the problem of spatially homogeneous relaxation of the O2-O mixture. A comparison of accuracy and calculation time between the methods is carried out. It is shown that the proposed algorithms allow one to predict the relaxation rates with good accuracy and to solve approximately the set of governing equations for the fluid-dynamic variables. Thus, we can recommend the use of machine learning methods in nonequilibrium gas dynamics coupled with detailed vibrational-chemical kinetics. Ways of further optimizing the considered methods are discussed.
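The surrogate-modelling idea in this abstract — predicting relaxation rates instead of recomputing them — can be sketched with a minimal k-nearest-neighbour regressor. The feature set and the smooth toy rate function below are invented placeholders, not the FHO or Schwartz-Slawsky-Herzfeld models used by the authors:

```python
import numpy as np

def knn_predict(X_train, y_train, X_query, k=3):
    """Predict each query as the mean target of its k nearest neighbours."""
    preds = []
    for q in np.atleast_2d(X_query):
        d = np.linalg.norm(X_train - q, axis=1)   # Euclidean distances
        idx = np.argsort(d)[:k]                   # indices of k nearest points
        preds.append(y_train[idx].mean())
    return np.array(preds)

# Toy stand-in for a rate coefficient as a smooth function of temperature
# and vibrational level (NOT the paper's physical models).
rng = np.random.default_rng(0)
raw = rng.uniform([1000.0, 0.0], [10000.0, 30.0], size=(500, 2))
X = (raw - raw.mean(axis=0)) / raw.std(axis=0)   # standardize the features
y = np.exp(-3000.0 / raw[:, 0]) * (1.0 + 0.05 * raw[:, 1])

# A k=1 query at a training point reproduces its stored rate exactly:
print(knn_predict(X, y, X[:1], k=1)[0], y[0])
```

Standardizing the features matters here: temperature spans thousands of kelvin while the level index spans tens, so unscaled Euclidean distances would ignore the vibrational level entirely.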

43

Shevchenko,A.M., G.N.Nachinkina, and M.V.Gorodnova. "Development of the Energy Method and Study of Algorithms for Predicting the Takeoff Trajectory of Aircraft." Mekhatronika, Avtomatizatsiya, Upravlenie 21, no.6 (June4, 2020): 366–74. http://dx.doi.org/10.17587/mau.21.366-374.

Full text

APA, Harvard, Vancouver, ISO, and other styles

Abstract:

The article is devoted to the development and study of a method for predicting events on the takeoff trajectory: the ground run, climb, and clearance of tall obstacles along the course. The prediction method is based on the energy approach to flight control developed in our previous work. The mathematical formulation of the method is the energy balance equation describing the mutual influence of all acting forces in the "aircraft-engine-environment" system. In this article, the equation is extended to ground movement along the runway by adding a term that accounts for the braking forces from the landing gear. The balance equation directly yields an algorithm for calculating the length of the forward trajectory required to accumulate the required amount of terminal energy. The takeoff trajectory includes ground and air segments, so the possibility of clearing the obstacle is fixed by the algorithm at the point of a possible takeoff decision, taking into account the subsequent air segment. This point is reached before the decision airspeed prescribed by the flight manual. Such advance warning improves the situational awareness of the pilot, which reduces stress and the risk of erroneous actions. A computer test stand was developed for the research, supporting both single runs and series of statistical tests. Takeoff scenarios are generated in the stand operator window, where the initial runway conditions, atmospheric disturbances, aircraft configuration, and obstacle coordinates are specified. The parameters of random errors in takeoff weight and centering are assigned, and noise in the longitudinal load factor measurements is set. A large volume of deterministic and statistical tests of the event-prediction algorithms was performed on the stand.
Using the statistical analysis module, the characteristics of the range prediction errors to the take-off decision point and the nose wheel separation point were calculated. Probability densities and histograms of error distribution over five characteristic zones of deviations from the mean are constructed. The confidence intervals for calculating the mathematical expectation are obtained. A prototype of an electronic indicator of the takeoff trajectory with marks of predictive characteristic coordinates has been developed.
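The core of the energy-balance prediction — converting the energy still to be accumulated into a required run length — can be illustrated with back-of-the-envelope numbers. All values below are hypothetical, and the net force is assumed constant over the segment, which the paper's full model does not require:

```python
# Hypothetical numbers; the paper's energy-balance equation relates the
# accumulated "terminal" energy to distance along the runway.
m = 70000.0          # aircraft mass, kg
v_goal = 80.0        # required airspeed at the decision point, m/s
v = 55.0             # current airspeed, m/s
F_net = 140e3        # thrust minus drag minus landing-gear braking force, N

# Kinetic energy still to accumulate, and the runway length needed for
# it, assuming F_net stays roughly constant over the segment:
dE = 0.5 * m * (v_goal**2 - v**2)
s = dE / F_net
print(f"required distance: {s:.0f} m")
```

Comparing `s` against the runway remaining is what lets the algorithm flag the takeoff decision before the prescribed decision airspeed is reached.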

44

Huang, Shiang-Fen, Chia-Chang Huang, Kun-Ta Chou, Yu-Jiun Chan, Ying-Ying Yang, and Fu-Der Wang. "Chronic Pulmonary Aspergillosis: Disease Severity Using Image Analysis and Correlation with Systemic Proinflammation and Predictors of Clinical Outcome." Journal of Fungi 7, no.10 (October7, 2021): 842. http://dx.doi.org/10.3390/jof7100842.

Full text

APA, Harvard, Vancouver, ISO, and other styles

Abstract:

(1) Background: The presentation of chronic pulmonary aspergillosis (CPA) ranges from single granuloma to fibrosis in the affected lung. CPA can be divided into five categories according to European Respiratory Society (ERS) guidance but is usually assessed by clinical physicians. Computer-based quantitative lung parenchyma analysis in CPA and its correlation with clinical manifestations, systemic inflammation, and angiogenesis have never been investigated. (2) Method: Forty-nine patients with CPA and 36 controls were prospectively enrolled. Pulmonary function tests (forced vital capacity (FVC), forced expiratory volume in one second (FEV1), and FEV1/FVC) and biomarkers in the peripheral blood (the chemokines interleukin (IL)-1B, IL-6, IL-10, IL-8, CRP, ESR, MMP1, MMP7, MMP8, TNF-α, calprotectin, SDF-1α, and VEGFA) were measured before antifungal treatment. The disease severity was categorized into mild, moderate, and severe based on chest computed tomography (CT) images. The oxygen demand and overall mortality until the end of the study were recorded. Quantitative parenchyma analysis was performed using the free software 3D Slicer. (3) Results: The results of quantitative parenchyma analysis concorded with the visual severity from the chest CT, oxygen demand, FVC, and FEV1 in the study subjects. The decrease in kurtosis and skewness of the lung density histograms on CT, increase in high attenuation area (HAA), and reduced lung volume were significantly correlated with increases in the PMN %, CRP, IL-1B, SDF-1α, MMP1, and calprotectin in peripheral blood in the multivariable regression analysis. TNF-α and IL-1B at study entry and the CPA severity from either a visual method or computer-based evaluation were predictors of long-term mortality. (4) Conclusion: The computer-based parenchyma analysis in CPA agreed with the categorization on a visual basis and was associated with the clinical outcomes, chemokines, and systemic proinflammation profiles.
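The histogram descriptors used in the parenchyma analysis — skewness, kurtosis, and the high-attenuation area (HAA) fraction — are straightforward to compute from voxel densities. The synthetic Hounsfield-unit samples and the −600 HU cut-off below are illustrative placeholders, not the study's actual data or thresholds:

```python
import numpy as np

def skewness(x):
    x = np.asarray(x, dtype=float)
    m, s = x.mean(), x.std()
    return ((x - m) ** 3).mean() / s ** 3

def kurtosis(x):
    x = np.asarray(x, dtype=float)
    m, s = x.mean(), x.std()
    return ((x - m) ** 4).mean() / s ** 4   # Pearson kurtosis; normal is ~3

# Synthetic voxel densities in Hounsfield units (illustrative only):
rng = np.random.default_rng(1)
healthy = rng.normal(-850, 50, 10000)   # mostly well-aerated lung
fibrotic = np.concatenate([healthy, rng.normal(-300, 80, 3000)])  # added HAA

print(skewness(healthy), skewness(fibrotic))

# High-attenuation area fraction above a fixed -600 HU threshold:
haa = (fibrotic > -600).mean()
print(haa)
```

Adding high-attenuation voxels drags the histogram's right tail upward, which is why increased HAA goes together with the skewness and kurtosis shifts the study reports.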

45

Karremann,M.K., J.G.Pinto, P.J.vonBomhard, and M.Klawa. "On the clustering of winter storm loss events over Germany." Natural Hazards and Earth System Sciences 14, no.8 (August8, 2014): 2041–52. http://dx.doi.org/10.5194/nhess-14-2041-2014.

Full text

APA, Harvard, Vancouver, ISO, and other styles

Abstract:

Abstract. During the last decades, several windstorm series hit Europe leading to large aggregated losses. Such storm series are examples of serial clustering of extreme cyclones, presenting a considerable risk for the insurance industry. Clustering of events and return periods of storm series for Germany are quantified based on potential losses using empirical models. Two reanalysis data sets and observations from German weather stations are considered for 30 winters. Histograms of events exceeding selected return levels (1-, 2- and 5-year) are derived. Return periods of historical storm series are estimated based on the Poisson and the negative binomial distributions. Over 4000 years of general circulation model (GCM) simulations forced with current climate conditions are analysed to provide a better assessment of historical return periods. Estimations differ between distributions, for example 40 to 65 years for the 1990 series. For such less frequent series, estimates obtained with the Poisson distribution clearly deviate from empirical data. The negative binomial distribution provides better estimates, even though a sensitivity to return level and data set is identified. The consideration of GCM data permits a strong reduction of uncertainties. The present results support the importance of considering explicitly clustering of losses for an adequate risk assessment for economical applications.
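The contrast drawn here between the Poisson and negative binomial distributions comes down to overdispersion: clustered storm counts have variance above their mean, which a Poisson model cannot represent. A method-of-moments sketch with invented winter counts (not the paper's loss data):

```python
import numpy as np

# Hypothetical winter storm-event counts for 30 winters (not the paper's data).
counts = np.array([0, 0, 1, 0, 2, 0, 0, 3, 0, 1, 0, 0, 0, 5, 0,
                   1, 0, 0, 2, 0, 0, 0, 4, 0, 1, 0, 0, 0, 2, 0])

mean, var = counts.mean(), counts.var(ddof=1)
print(f"mean={mean:.2f}, variance={var:.2f}")  # variance > mean => overdispersion

# Poisson: P(N >= 1) with lambda = sample mean
p_poisson = 1 - np.exp(-mean)

# Negative binomial by method of moments: var = mu + mu^2 / r
r = mean ** 2 / (var - mean)
p = r / (r + mean)              # "success" probability in the NB pmf
p_nb = 1 - p ** r               # P(N >= 1) = 1 - P(N = 0) = 1 - p^r

# Return period (in winters) of at least one event per winter:
print(f"Poisson: {1/p_poisson:.1f}  NB: {1/p_nb:.1f}")
```

Because the negative binomial concentrates events into fewer winters, it assigns a longer return period to "at least one event" than the Poisson fit to the same mean — the same direction of disagreement the abstract reports for the rarer series.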

46

Karremann,M.K., J.G.Pinto, P.J.vonBomhard, and M.Klawa. "On the clustering of winter storm loss events over Germany." Natural Hazards and Earth System Sciences Discussions 2, no.3 (March4, 2014): 1913–43. http://dx.doi.org/10.5194/nhessd-2-1913-2014.

Full text

APA, Harvard, Vancouver, ISO, and other styles

Abstract:

Abstract. During the last decades, several windstorm series hit Europe leading to large aggregated losses. Such storm series are examples of serial clustering of extreme cyclones, presenting a considerable risk for the insurance industry. Clustering of events and return periods of storm series for Germany are quantified based on potential losses using empirical models. Two reanalysis datasets and observations from German weather stations are considered for 30 winters. Histograms of events exceeding selected return levels (1, 2 and 5 year) are derived. Return periods of historical storm series are estimated based on the Poisson and the negative Binomial distributions. Over 4000 years of global circulation model simulations forced with current climate conditions are analysed to provide a better assessment of historical return periods. Estimations differ between distributions, for example 40 to 65 years for the 1990 series. For such less frequent series, estimates obtained with the Poisson distribution clearly deviate from empirical data. The negative Binomial distribution provides better estimates, even though a sensitivity to return level and dataset is identified. The consideration of GCM data permits a strong reduction of uncertainties. The present results support the importance of considering explicitly clustering of losses for an adequate risk assessment for economical applications.

47

Bataju, Sailesh, and Nurapati Pantha. "Hydrophobicity of an isobutane dimer in water, methanol and acetonitrile as solvents — A classical molecular dynamics study." International Journal of Modern Physics B 33, no.32 (December30, 2019): 1950391. http://dx.doi.org/10.1142/s0217979219503910.

Full text

APA, Harvard, Vancouver, ISO, and other styles

Abstract:

The potential of mean force (PMF) has been determined for an isobutane dimer in various solvent environments, namely water, methanol and acetonitrile, at a temperature of 298 K and pressure of 1 bar using GROMACS software. All the molecular dynamics (MD) simulations are carried out using a TIP3P water model under a CHARMM36 forcefield. Following the Umbrella Sampling technique, PMFs are calculated and analyzed using the Weighted Histogram Analysis Method (WHAM), and the coordination number of the first solvation shell is extracted for all solvents using the radial distribution function. The shape of the PMFs contains contact minima, solvent-separated minima and desolvation maxima. The positions of the contact minima are not affected much by the solvent environment and are found at 0.5377, 0.5480 and 0.5495 nm for water, methanol and acetonitrile, respectively. The corresponding energy depths are found to be −0.9134, −0.7080 and −0.5295 kcal/mol. The variation observed at the solvent-separated minima is noticeable; they are found at 0.9012, 0.9721 and 0.9151 nm for water, methanol and acetonitrile, respectively. The coordination number of the first solvation shell, taking an isobutane molecule as reference from its center of mass, is found to be 28.1, 16.9 and 14.8 for water, methanol and acetonitrile, respectively. There is a softer hydrophobic interaction between the isobutane dimer and solvents like methanol and acetonitrile relative to water, which might be due to the presence of the competing methyl groups of methanol and acetonitrile in the solvent medium.
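The PMF shape described — a contact minimum, a desolvation barrier, then a solvent-separated minimum — can be located numerically once the curve is available. The curve below is a synthetic three-Gaussian stand-in with roughly the reported positions, not a WHAM output:

```python
import numpy as np

# Synthetic PMF (kcal/mol) over dimer separation r (nm): a contact
# minimum near 0.54, a desolvation barrier near 0.72, and a shallower
# solvent-separated minimum near 0.90 -- shapes only, not real data.
r = np.linspace(0.40, 1.20, 801)
pmf = (-0.9 * np.exp(-((r - 0.54) / 0.04) ** 2)
       + 0.4 * np.exp(-((r - 0.72) / 0.05) ** 2)
       - 0.3 * np.exp(-((r - 0.90) / 0.06) ** 2))

# Interior local minima: grid points lower than both neighbours.
interior = (pmf[1:-1] < pmf[:-2]) & (pmf[1:-1] < pmf[2:])
minima_r = r[1:-1][interior]
print(minima_r)
```

On real WHAM output the same sign-change test works, though noisy curves usually need smoothing first so that statistical wiggles are not reported as extra minima.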

48

Wang, Huihui, Xueyu Zhang, Pengpeng Li, Jialiang Sun, Pengtao Yan, Xu Zhang, and Yanqiu Liu. "A New Approach for Unqualified Salted Sea Cucumber Identification: Integration of Image Texture and Machine Learning under the Pressure Contact." Journal of Sensors 2020 (November11, 2020): 1–13. http://dx.doi.org/10.1155/2020/8834614.

Full text

APA, Harvard, Vancouver, ISO, and other styles

Abstract:

At present, rapid, nondestructive, and objective identification of unqualified salted sea cucumbers with excessive salt content is extremely difficult. Artificial identification is the most common method, which is based on observing sea cucumber deformation during recovery after applying and removing pressure contact. This study is aimed at simulating the artificial identification method and establishing an identification model to distinguish whether a salted sea cucumber exceeds the standard by means of machine vision and machine learning technology. A system for identification of salted sea cucumbers was established, which applied standard, uniform pressure forces and collected deformation images of salted sea cucumbers during recovery after pressure removal. Image texture features of contour variation were extracted based on histograms (HIS) and the gray-level co-occurrence matrix (GLCM), which were used to establish identification models by combining them with general regression neural networks (GRNN) and support vector machines (SVM), respectively. Contour variation features of salted sea cucumbers were extracted using a specific algorithm to improve the accuracy and stability of the model, and dimensionality reduction and fusion of the feature images were achieved. According to the results, the SVM identification model integrated with GLCM (GLCM-SVM) was found to be optimal, with accuracy, sensitivity, and specificity of 100%, 100%, and 100%, respectively. In particular, the sensitivity reached 100%, demonstrating the optimized model's excellent ability to identify excessively salted sea cucumbers. This study illustrates the potential for identification of salted sea cucumbers based on pressure contact by combining image texture of contour variation with machine learning.
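A gray-level co-occurrence matrix like the one behind the GLCM-SVM model can be built directly from an integer-quantized image; the 4 × 4 toy image, the single (dx = 1, dy = 0) offset, and the contrast feature below are illustrative only, not the paper's feature pipeline:

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one offset, normalized to sum to 1."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1   # count ordered pairs
    return m / m.sum()

def contrast(p):
    """Haralick contrast: large when co-occurring levels differ strongly."""
    i, j = np.indices(p.shape)
    return ((i - j) ** 2 * p).sum()

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]], dtype=int)
p = glcm(img, levels=4)
print(contrast(p))   # 7/12 for this image and offset
```

In practice one aggregates several offsets and several Haralick features (contrast, correlation, energy, homogeneity) into the texture vector fed to the classifier.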

49

Stein,JensV., Guiying Cheng, BrittM.Stockton, BrianP.Fors, EugeneC.Butcher, and UlrichH.vonAndrian. "L-selectin–mediated Leukocyte Adhesion In Vivo: Microvillous Distribution Determines Tethering Efficiency, But Not Rolling Velocity." Journal of Experimental Medicine 189, no.1 (January4, 1999): 37–50. http://dx.doi.org/10.1084/jem.189.1.37.

Full text

APA, Harvard, Vancouver, ISO, and other styles

Abstract:

Adhesion receptors that are known to initiate contact (tethering) between blood-borne leukocytes and their endothelial counterreceptors are frequently concentrated on the microvilli of leukocytes. Other adhesion molecules are displayed either randomly or preferentially on the planar cell body. To determine whether ultrastructural distribution plays a role during tethering in vivo, we used pre-B cell transfectants expressing L- or E-selectin ectodomains linked to transmembrane/intracellular domains that mediated different surface distribution patterns. We analyzed the frequency and velocity of transfectant rolling in high endothelial venules of peripheral lymph nodes using an intravital microscopy model. Ectodomains on microvilli conferred a higher efficiency at initiating rolling than random distribution which, in turn, was more efficient than preferential expression on the cell body. The role of microvillous presentation was less accentuated in venules below 20 μm in diameter than in larger venules. In the narrow venules, tethering of cells with cell body expression may have been aided by forced margination through collision with erythrocytes. L-selectin transfected cells rolled 10-fold faster than E-selectin transfectants. Interestingly, rolling velocity histograms of cell lines expressing equivalent copy numbers of the same ectodomain were always similar, irrespective of the topographic distribution. Our data indicate that the distribution of adhesion receptors has a dramatic impact on contact initiation between leukocytes and endothelial cells, but does not play a role once rolling has been established.

50

Barreto-Mota,L., E.M.deGouveiaDalPino, B.Burkhart, C.Melioli, R.Santos-Lima, and L.H.S.Kadowaki. "Magnetic field orientation in self-gravitating turbulent molecular clouds." Monthly Notices of the Royal Astronomical Society 503, no.4 (March19, 2021): 5425–47. http://dx.doi.org/10.1093/mnras/stab798.

Full text

APA, Harvard, Vancouver, ISO, and other styles

Abstract:

Stars form inside molecular cloud filaments from the competition of gravitational forces with turbulence and magnetic fields. The exact orientation of these filaments with the magnetic fields depends on the strength of these fields, the gravitational potential, and the line of sight (LOS) relative to the mean field. To disentangle these effects we employ three-dimensional magnetohydrodynamical numerical simulations that explore a wide range of initial turbulent and magnetic states, i.e. sub-Alfvénic to super-Alfvénic turbulence, with and without gravity. We use histograms of relative orientation (HRO) and the associated projected Rayleigh statistic (PRS) to study the orientation of the density and, in order to compare with observations, the integrated density relative to the magnetic field. We find that in sub-Alfvénic systems the initial coherence of the magnetic field is maintained inside the cloud and filaments form perpendicular to the field. This trend is not observed in super-Alfvénic models, where the lines are dragged by gravity and turbulence and filaments are mainly aligned with the field. The PRS analysis of integrated maps shows that LOS effects are important only for sub-Alfvénic clouds. When the LOS is perpendicular to the initial field orientation, most of the filaments are perpendicular to the projected magnetic field. The inclusion of gravity increases the number of dense structures perpendicular to the magnetic field, reflected as lower values of the PRS for denser regions, regardless of whether the model is sub- or super-Alfvénic. The comparison of our results with observed molecular clouds reveals that most are compatible with sub-Alfvénic models.
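The projected Rayleigh statistic used to summarize the HROs can be sketched from relative-orientation angles. The definition below, Zx = Σ cos 2φ / √(n/2), follows the convention common in HRO studies (strongly positive Zx means structures mostly parallel to the field, strongly negative means mostly perpendicular); the angle samples are synthetic:

```python
import numpy as np

def prs(phi):
    """Projected Rayleigh statistic Zx = sum(cos 2*phi) / sqrt(n/2).

    Zx >> 0: structures mostly parallel to the field (phi near 0);
    Zx << 0: mostly perpendicular (phi near 90 degrees).
    """
    phi = np.asarray(phi)
    return np.cos(2 * phi).sum() / np.sqrt(phi.size / 2)

rng = np.random.default_rng(2)
# Hypothetical relative-orientation angles (radians) for two clouds:
parallel = rng.normal(0.0, 0.3, 1000)              # filaments along B
perpendicular = rng.normal(np.pi / 2, 0.3, 1000)   # filaments across B
print(prs(parallel), prs(perpendicular))
```

The √(n/2) normalization makes |Zx| of a few already statistically significant against the uniform-angle null hypothesis, which is what lets PRS values be compared across density bins.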
