Editorial
Career Ride in Digital Economy: Lack of Digital Skills and Lack of Professionals in Industry
Sunil Khilari*, Tanaji Dabade and Balasaheb Bhamangol
Published : May 18, 2024
DOI : 10.56831/PSEN-04-129
Research Article
On Enforcing Existence and Non-Existence Constraints in MatBase
Christian Mancas*
Published : May 18, 2024
DOI : 10.56831/PSEN-04-130
Literature Review
Damiebi Denni-Fiberesima*
Published : May 27, 2024
Review Article
The Importance of Artificial Intelligence to Africa's Development Process: Prospects and Challenges
Censrehurd, Zemoh Yannick Tangmoh and Pefela Gildas Nyugha*
Published : May 27, 2024
Review Article
Huishan Zhang*
Published : May 27, 2024
The X model therefore has four major modelling considerations: (1) Sensors/Inputs, shown on the upper left of the X; (2) Processors/Quantum A.I., shown on the upper right; (3) Controllers, shown on the lower right; and (4) Outputs/Actuators/Executors, shown on the lower left. Their fusion gives the complete engineering of Phantom Robot design. In each section of the X of Phantom Robots, the three essential design aspects are the analysis of different colored light wavelengths/quanta for use as sensors/inputs, processors/quantum ALUs, light controllers, and outputs/actuators/executors, using modulation, modes, means, transport, hopping, tunneling, spins, amplification, stimulated emission, multiplication, rotation, reflection, transformation, splitting, beam narrowing, beam expansion, diffraction, and refraction of light/wavelengths/quanta/photons, respectively. These are further indicated on the X model of Phantom Robotics Engineering as (1) Sensors/Inputs with modelling layers 1-L1, 1-L2, and 1-L3; (2) Processors/Quantum A.I. with modelling layers 2-L1, 2-L2, and 2-L3; (3) Controllers with modelling layers 3-L1, 3-L2, and 3-L3; and finally (4) Outputs/Actuators/Executors with modelling layers 4-L1, 4-L2, and 4-L3.
Fig. 2 is a compact representation of the idea in the form of a small cube called the "X Phantom Robot Optical Illusion Box": a small hand-held unit that, once operated, projects the Phantom/Aura Robot at a designed remote distance, direction, and place using a complicated network of different wavelengths/quanta with mixed colored light. There are two major benefits if mankind is able to design such a robotic (Phantom/Aura) form, which looks like a projected multicolor light spot but is in fact an intelligent robot working like an A.I.-based one. The first major benefit is that such a QAI-based Phantom Robot is several times faster and smarter and can travel in space to explore the Universe: stars, planets, exoplanets, black holes, wormholes, new life forms, intelligent life forms, and intelligent alien life. Equally, it is possible that such intelligent light forms already exist at several locations in the Universe and act as intelligent alien civilizations; in that case, Phantom Robots would provide great compatibility and close understanding with them while travelling at the speed of light.
The second point concerns the Earth-like planet Proxima Centauri b, the nearest discovered exoplanet, orbiting in the habitable zone of a red dwarf, with signs of a possible exoworld hosting intelligent life forms. Proxima Centauri b lies 4.24 light years (ly) from Earth, and it would take about 63,000 years to reach it and communicate with any human-like intelligent life forms there. NASA is trying to reduce this to 200 years with the proposed "Tree of Life" project and CubeSats, but the travel time can be reduced still further if we are able to engineer and transmit Phantom/Aura space robots that travel at, or faster than, the speed of light. I use the words "faster than" to describe the design of a light that disappears and appears in deep space from one planet to another, synchronizing the speed of light with the "speed of thought": just as our thought reaches and touches Proxima Centauri b within a second in the brain's imagination, such speed could be implemented in Phantom/Aura Robots using ultra quantum artificial intelligence engineering. In conclusion, I want to state that after the successful implementation of Phantom Robots for space applications, we can synchronize them with the thoughts of the human in charge to achieve the speed of thought, which exceeds the speed of light.
I introduced the concept of "Phantom/Aura Robots", with further enhancement, in my last research communication on Quantum Artificial Intelligence (QAI), published last year, where I discussed how we can engineer, group, and gather various lights into a single "Intelligent Light", or, in other words, an Optical Artificial Intelligence, using various LEDs, laser diodes, and other light-emitting sources. Hence, in the present research communication I want to take it to the next level with the origin of the idea of "Phantom/Aura Robotics": how one can think about it, and how one can start its analysis, engineering, and modeling for its implementation. I have therefore developed the "X of Phantom Robotics Engineering" model depicted in Fig. 1. The "X" of phantom engineering starts from the same universal intelligence model aspects, "Input, Process, Control and Output", but here in virtual light form, so strong analysis is required to work out this fresh idea.
In the present article, an ideal equivalent three-degrees-of-freedom (DoF) system of a one-bay bridge (supported on elastometallic bearings) with distributed stiffness and mass along its length is given. Physically, the bridge has an infinite number of degrees of freedom, but based on a free-vibration study using the partial differential equation, a mathematically ideal three-degrees-of-freedom system is obtained, whose ideal mass matrix is written analytically at specific mass locations on the bridge. Thus, using this three-degrees-of-freedom system, the first three fundamental mode shapes of the real bridge are identified. Moreover, considering the 3×3 mass matrix, we can attempt an estimation of feasible future damage to the bridge if a known technique for the identification of dynamic characteristics is applied. Furthermore, the installation of a local network of three uniaxial accelerometers must be compatible with the abovementioned three degrees of freedom. It is noteworthy that this technique can be applied to bridges where the notion of concentrated mass is entirely absent.
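A minimal numerical illustration of extracting the first three mode shapes from a 3×3 mass and stiffness matrix via the generalized eigenproblem (the matrices below are made up for the sketch, not the bridge's actual ones):

```python
import numpy as np
from scipy.linalg import eigh

# Illustrative 3x3 mass and stiffness matrices for a 3-DoF idealization
M = np.diag([2.0e5, 3.0e5, 2.0e5])                     # kg
K = 1.0e8 * np.array([[ 2, -1,  0],
                      [-1,  2, -1],
                      [ 0, -1,  2]])                    # N/m

# Generalized eigenproblem K·phi = omega^2 · M·phi
w2, phi = eigh(K, M)
freqs_hz = np.sqrt(w2) / (2 * np.pi)
print("first three natural frequencies (Hz):", freqs_hz.round(2))
print("mode shapes (columns):\n", phi.round(3))
```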
This paper presents a mini-review of our previous work, presented in refs. [1-6], in which neural networks (NNs) were used for estimating and then detecting the robot's collisions with the human operator during a cooperative task. The review investigates and compares the designed NN architectures, their application, the resulting mean squared error (MSE) from training, and their effectiveness (%) in detecting the robot's collisions. The review reveals that NNs are an effective method for estimating and detecting human-robot collisions.
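A toy sketch of the general approach, an MLP estimating an external-torque signal that is then thresholded to flag a collision; the data, architecture, and threshold are synthetic, not those of refs. [1-6]:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
# Synthetic training data: joint-state features -> external torque (Nm)
X = rng.normal(size=(2000, 6))                 # e.g. positions, velocities, currents
tau_ext = 0.5 * X[:, 0] - 0.3 * X[:, 3] + rng.normal(0, 0.05, 2000)

nn = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
nn.fit(X, tau_ext)
print("training MSE:", np.mean((nn.predict(X) - tau_ext) ** 2))

# Detection: flag a collision when the estimated external torque exceeds a threshold
THRESH = 1.0
sample = rng.normal(size=(1, 6))
print("collision detected:", abs(nn.predict(sample)[0]) > THRESH)
```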
Keywords: Electrical resistivity; thermal aging; scattering centers; conduction electrons; precipitation process
This work aims to study a new lead-free solder alloy as a substitute for lead-tin alloys in order to address human-health concerns. The effect of thermal aging on the electrical resistivity of a eutectic Sn-3.5wt%Ag lead-free solder alloy has been investigated. The samples were aged in the temperature range of 100°C-160°C. During aging, the precipitation behavior was followed by electrical resistivity measurements. The resistivity change of the Sn-3.5wt%Ag eutectic alloy rises to a maximum value and then decreases to a constant value at each aging temperature. The maximum value increases as the aging temperature increases. The slight reduction in resistivity after the maximum is due to the coalescence and growth of precipitates and the consequent decrease in the scattering of conduction electrons. Microstructural characterization using a scanning electron microscope (SEM) was conducted to follow the precipitation behavior in detail. The SEM results showed that the precipitation process increases with increasing thermal aging time and that coarsening of fine precipitates occurs in samples aged for a long time.
Keywords and phrases: Star Laplace Transform s-step; Laplace Transform; Star Coefficient; Star-System; Equations Matrix
The Laplace transform has many applications in science and engineering because it is a tool for solving differential equations. In this paper we propose a Star-Laplace transform s-step. We give the definition of this Star-Laplace transform s-step of a function f(t), t ∈ [0, +∞), some examples, and basic properties. We also give the form of its inverse by using the theory of the Laplace transform.
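For reference, the classical transform that the proposed Star-Laplace transform s-step builds on is given by the standard definition (background only, not the paper's new operator):

```latex
\[
F(s) = \mathcal{L}\{f(t)\}(s) = \int_{0}^{+\infty} e^{-st} f(t)\, dt,
\qquad
\mathcal{L}\{f'(t)\}(s) = s\,F(s) - f(0),
\]
```

which is why it turns a linear differential equation in t into an algebraic equation in s.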
With the mounting demand for oil and gas worldwide, the maintenance of repaired pipelines has become very important in recent decades, and composite materials have eased the way to rehabilitate corroded specimens. This study aims to investigate the failure probability of pipes suffering from internal corrosion that have been repaired with fiber-reinforced composite materials and are subjected to internal pressure as well as a temperature gradient. The Monte Carlo method is employed, along with the Spearman rank correlation coefficient, to signify the role of the input parameters in the level of failure probability.
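As an illustration of the general approach, here is a minimal Monte Carlo sketch with a simplified, made-up limit state and input distributions (not the authors' actual thermo-mechanical model):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical random inputs (distributions are illustrative only)
yield_strength = rng.normal(450e6, 30e6, n)      # Pa
wall_thickness = rng.normal(0.010, 0.001, n)     # m
defect_depth   = rng.normal(0.003, 0.0008, n)    # m
pressure       = rng.normal(12e6, 1.5e6, n)      # Pa
diameter       = 0.3                             # m

# Simplified limit state: burst pressure of the corroded wall vs. operating pressure
burst = 2 * yield_strength * (wall_thickness - defect_depth) / diameter
failure = burst < pressure
print(f"Estimated failure probability: {failure.mean():.4f}")

# Spearman rank correlation: which inputs drive the failure margin most
margin = burst - pressure
for name, x in [("yield_strength", yield_strength),
                ("wall_thickness", wall_thickness),
                ("defect_depth", defect_depth),
                ("pressure", pressure)]:
    rho, _ = spearmanr(x, margin)
    print(f"{name:15s} rho = {rho:+.2f}")
```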
Keywords: Regionalism; Modern Regionalism; Modern Hospital; Culture; Tradition; Society
The present article examines the link between modern regionalism and modern hospitals, and notes that addressing community culture in architecture is increasingly debated. The contemporary period has brought about extensive changes in most fields, including architectural theories; it has put forward diverse and numerous points of view and, in some cases, has led to the emergence of new approaches to ancient and rooted thoughts. Hence, it is suggested that a relationship be established between the building and human beings, which makes the environment and space better understood by humans and creates a kind of emotional connection between the two. Therefore, attention to the new regionalism that is the subject of this article is examined and, in light of the progress of science and technology, its relationship with medical centers, including hospitals, is examined.
Keywords: gyroscope theory; spinning disc property; inertial torque
The main property of gyroscopic devices is maintaining the axis of a spinning rotor in space. All gyroscopes manifest the action of inertial torques and motions whose mathematical models are based on the principle of conservation of mechanical energy. New studies show that an external force applied to the spinning rotor generates a net of interrelated inertial torques acting around the axes of its rotation. The four centrifugal torques, two Coriolis torques, and two torques of the change in angular momentum are looped around two axes and express the resistance and precession torques of the spinning rotor. Blocking the gyroscope's precessed motion leads to the vanishing of the inertial torques around the two axes, except the precession torque generated by the center of mass of the spinning rotor. The vanishing of the inertial torques of a running gyroscope is not well described in known publications and needs clarification. This paper presents a detailed explanation of the physics of the vanishing of inertial torques for the case in which gyroscopic precessed motion is blocked.
Keywords: Preventive Medicine; Milk protein; Nutrition proteins; Health informatics; Medication management; HPLC
The nutritional value of milk is commonly evaluated by determining the total amounts of nitrogen, amino acids, and proteins. Because all these methods are based on the determination of a single parameter and lack high selectivity, many problems have been encountered. This presentation reports a highly selective method for the quality control of bovine milk by determining the relative amounts of five proteins by reversed-phase liquid chromatography. The fingerprint of the main nutrition proteins, comprising five regions of bovine milk, is first established and then characterized by a so-called "characterization curve of nutrition proteins, CCNP", which is drawn based on the relative peak areas of the five protein groups. Any significant change in the profile of the CCNP qualitatively indicates the existence of additives in the milk, while the magnitude of the deviation from the standard CCNP measures the change in the content of each nutrition protein. When ewe's milk, soybean milk, and water were separately added to a bovine milk, these additives could be quantitatively determined with an RSD below 5% by the presented method.
What is the solution?
1. If the driver's seat is on the left side and the driver looks in the mirror on his immediate left, he can see people getting on, getting off, and falling, as well as oncoming vehicles and pedestrians, at the same time. If any passenger falls from the vehicle, it can be stopped immediately.
2. There is no need to turn the head to the left side, so time is saved and the danger caused by turning is also avoided.
3. When passing another vehicle on the right side, the driver's seat being on the left side means that the left side of the vehicle can be clearly seen.
4. When a driver sitting on the left side, on a road allowing travel in both directions, tries to overtake another vehicle on the right side and a vehicle suddenly comes from the opposite direction, the driver acts by reflex, so the vehicle on his left side is likely to be endangered; therefore the vehicle is kept to the left only with great care.
When overtaking another vehicle on the right side, if the driver's seat is on the left side, the sides of the vehicle on the immediate left can be clearly seen.
5. The life of a driver sitting on the left side of the vehicle is not safe if, while overtaking another vehicle on the right side of a one-way road, his vehicle hits the vehicle in front from the rear or side. He must be very careful and let the other vehicle pass, as his own life may be in danger.
6. A driver sitting on the left side will have to maintain a greater distance from the vehicle in front in order to see the position of vehicles coming from the opposite direction before overtaking. This reduces the risk of hitting the rear of the vehicle in front.
A driver sitting on the left side is more likely to be involved in an accident when the vehicle moves to the left to avoid one, and he will realize that his own life is in danger if he hits whatever is immediately to his left. This will prompt the driver to drive cautiously.
The accident that caused the death of 9 people in Vadakancherry took place on a one-way road. It can be seen that the vehicle that caused the accident hit the rear and right side of the vehicle in front. Had the driver's seat been on the left side, such an accident would never have happened; the driver acted to save his own life, and he was not seriously injured.
7. If the driver gets out of a left-side driver's seat by opening the door, such accidents will be avoided, as the two-wheeler coming from behind will not be affected.
8. Driver's position in vehicles should be on the left side of the vehicle if pedestrians are also walking on the right side.
In right-hand driving, the advantage is that the driver sits on the right side
1. Pedestrians can be made to walk on the left side, as pedestrians have a natural tendency to walk on the left in right-hand driving. It is also possible to step back to the left side by reflex action when any vehicle is approaching.
It is said that pedestrians should walk on the right side of the road in order to see the vehicles coming from the opposite side.
2. If you want to overtake the vehicle while driving on the right side, you should keep more distance from the vehicle in front.
Since the driver has an innate tendency to veer to the left while driving on the right side, on one-way roads, when the driver tries to pass the vehicle in front of him on the left side, his action follows reflex action, so the risk of an accident is high for the driver.
This defect cannot be avoided as passengers should be prioritized over the driver. A driver who has experienced the risk of an accident many times will be more cautious.
3. If the driver gets down from the right side driver's seat by opening the door, such accidents will be avoided as it will not affect the two-wheeler coming behind.
4. When a two-wheeler is stopped while driving on the right side, the vehicle does not lean towards the footpath, because the left foot is put down. This makes it more convenient for pedestrians to walk.
If the driver's seat is on the right side of the vehicle, then driving should be on the right side and the door of the vehicle should be on the right side.
Why walk on the left side?
Indians generally have a tendency to walk on the left side. A soldier starts his march on the left foot. When one stands to attack another, one stands with the left leg crossed. The left side is always defensive and retreating.
While walking along the footpath, you can see that most of the people walk on the left side to pass the person coming from the opposite side.
A person or a group of people walking in the middle of the road on a road with few vehicles has been observed to move to the left side of the road when they hear the bell of a bicycle behind them.
Those standing beyond the halfway mark have also been seen shifting to the left side of the road.
A cyclist touches the ground first with the left foot.
It is wrong to suggest walking on the right side when there is a natural tendency to walk on the left side. Therefore, it is essential that pedestrians walk on the left side, vehicles drive on the right side, and the driver's seat is on the right side of the vehicle.
The need for uniform traffic rules all over the world
Today, many countries have different traffic laws and practices. In some countries you drive on the right side, while in others you drive on the left. Similarly, there are vehicles with the driver's seat on the left side and on the right side.
Because of this, many people do not like to drive, for fear of getting into trouble when visiting countries whose practices are opposite to the ones they are used to. Many people are not used to driving against the traffic pattern of another country.
Many people travel and work in different countries with different traffic laws on the same day. Different road traffic rules cause various doubts in the mind.
Many existing traffic laws are against the reflex action that comes from our unconscious mind.
Most drivers cannot concentrate on driving the whole time. Many people think about many other things while driving. Most of the time, driving is by reflex action.
When danger occurs, the mind often reacts simultaneously in favor of reflex action but against the traffic law. This causes various types of accidents.
If the traffic rules are the same around the world and based on reflex action, accidents will be reduced for those who cannot concentrate on driving the whole time.
It is very important to enforce the same traffic rules all over the world.
Keywords: pedestrians; right-hand driving; safety; driver's seat; accident
An anomaly of the current right-hand driver's position
1. It is difficult for a driver sitting on the right side of the vehicle to accurately know the position of a vehicle passing on his left side or of pedestrians there.
It is common for pedestrians to be struck by vehicles because the left front of the vehicle cannot be seen clearly enough from the driver's seat.
2. Presently the driver's seat is on the right side and the bus door is on the left side. The driver sitting on the right has to turn his head to the left and look at the left-side mirror to see the people getting on and off on the left side.
When the head is turned to the left, it is impossible to see the vehicles coming from the right behind and the opposite direction in front. Similarly, when the head is turned from the left side to the right side, it is not possible to see what is happening on the left side of the vehicle.
Buses do not have doors in most places in India, and where they do, the doors are not closed. Due to this, a large number of passengers are seriously injured or die every year by falling from the vehicle, getting out after the vehicle starts moving, or getting in while it is moving.
3. It takes time to turn the head to the left to look at the left mirror and then back to the right.
This makes it impossible to look in the left-side mirror and pay attention to the oncoming vehicle in front at the same time. This creates danger.
4. When overtaking another vehicle on the right side, the overtaking driver's seat is on the right side and he cannot see the left side of his vehicle.
5. When a driver sitting on the right side, on a road allowing travel in both directions, tries to pass another vehicle on the right side and a vehicle suddenly comes from the opposite direction, the driver acts by reflex; without being able to pay attention to the vehicle on his left side, he keeps his vehicle to the left to save his own life.
6. When overtaking another vehicle on the right side of a one-way road, even if the vehicle hits the vehicle in front from the rear or side, the life of the driver sitting on the right side is safe. This makes many drivers arrogant.
This is what happened in the accident that resulted in the immediate death of 9 people in Vadakancherry.
The accident took place in Vadakancherry on a one-way road. It can be seen that the vehicle that caused the accident hit the rear and right side of the vehicle in front.
7. It is comparatively difficult to get out from the right-side driver's seat to the left side when the vehicle is stopped, so the driver opens the right-side door and gets out on the right. When the door is opened in this way, there are many incidents of death caused by a following two-wheeler hitting the door.
The main reason for the sudden death of 9 people in Vadakancherry was left-side driving with the driver's seat on the right side.
8. Pedestrians should be given more priority on the road.
In India, vehicles stop and park on the left. Two-wheeler drivers put their left foot on the ground and stop the vehicle by tilting it to the left side. This also causes more inconvenience and danger to the pedestrians walking on the right side.
If two-wheelers passing on the left side lean to the right to avoid inconveniencing pedestrians, the danger to the two-wheeler increases: if the rider tilts to the right and stops the vehicle, it loses its balance, and the wheel of a vehicle coming from behind may run over the leg of the rider of the two-wheeler stopped in front.
Keywords: Photonic Crystal fiber (PCF); Macro-bending Loss; Effective index method (EIM)
This paper highlights the losses arising from bending, which play a vital role in long-haul communication. To get better transmission it is essential to control these losses. Losses arising from bending of a photonic crystal fiber (PCF) can be controlled by altering the structural parameters (d/Λ and Λ) that define the photonic crystal fiber structure. PCFs based on different materials (viz. crown and phosphate glass) with lower bending losses have been reported by exploring the whole communication band (0.2-2 µm) with variable structural parameters and bending radii. During bending of a fiber, evanescent-field energy drives light towards the cladding and a subsequent attenuation of the guided light energy occurs, which generates losses that can be controlled by altering the structural parameters of the PCF; failure of total internal reflection is therefore the major contribution to these losses. In this study, very low bending losses of ~0.11 dB/km and ~0.08 dB/km have been observed for crown- and phosphate-based PCF, respectively. An interesting phenomenon has also been observed: with an increase in Λ at constant d/Λ, the losses increase up to λ ~ 1.2 µm and then start decreasing with further increase in Λ, which can be useful for experimentalists during fabrication. Overall, this work can provide a valuable approach for engineers to develop a newer kind of PCF model that lowers losses with a corresponding change in the material domain.
Keywords: open channel hydraulics; side weir; discharge coefficient; rectangular side weir
More than eighty years ago, De Marchi stated an assumption for the hydraulics of flow through a side weir. Many researchers who study side weirs have depended on this assumption in their studies. Many studies of side weirs have been carried out in different channels, and numerical studies have supported the results. This study deals with a comprehensive review of the discharge coefficient for side weirs.
Wireless communication has evolved from 1G to 6G, with the evolution of technology driven by customer requirements; continuously growing requirements increase the demand on technology. 6G is the successor of 5G technology. Sixth-generation (6G) systems operate at THz frequencies. Terahertz waves (THz), which are sub-millimeter waves sitting between microwave and infrared light on the electromagnetic spectrum, have been used to achieve data rates greater than 100 Gbps. 6G incorporates various technologies such as machine-to-machine (M2M) communication. 6G communication supports 3D media technology, optical wireless communication (OWC), 3D networking, unmanned aerial vehicles (UAV), and wireless power transfer, and also supports the Internet of Nano Things (IoNT). 6G will support technologies like automated cars and smart-home networks, helping create seamless connectivity between the internet and everyday life. In development for 2030, 6G will support advancements in technology such as virtual reality (VR), augmented reality (AR), the metaverse, and artificial intelligence (AI). China successfully launched the world's first 6G satellite; the satellite uses terahertz waves that could send data at speeds several times faster than 5G, and 12 other Earth-observing satellites were aboard the rocket. The new era starts with reality connecting with surfaces rather than devices. 6G will become the first network to connect all three systems (space, earth, and aerial) in a line.
Keywords: Rehabilitation; COVID-19; SpO2; Monitoring; Oximetry
We implemented a pulse oximeter that measures altitude, heart rate (HR), and oxygen saturation (SpO2) with Bluetooth communication, meeting the need for real-time as well as remote monitoring by the therapist and thereby helping to avoid the spread of the SARS-CoV-2 virus. Using Bluetooth technology, the data are transmitted in real time to a mobile phone through an application for the Android operating system; a medical history record is generated using the MIT App Inventor platform; likewise, an audible alarm is activated if the saturation level is out of range, taking into consideration the variation of oxygen saturation with the altitude at which the measurement is taken.
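A minimal sketch of the alarm logic described, with an illustrative altitude correction (the 1% per 1000 m slope and the 94% baseline are assumptions, not the authors' calibration):

```python
def spo2_alarm(spo2_percent: float, altitude_m: float,
               base_threshold: float = 94.0) -> bool:
    """Return True if an audible alarm should be raised.

    The threshold is relaxed with altitude because healthy SpO2 readings
    drop as barometric pressure falls; the 1%-per-1000 m slope below is
    purely illustrative.
    """
    threshold = base_threshold - 1.0 * (altitude_m / 1000.0)
    return spo2_percent < threshold

# Example: a reading of 91% at 3,400 m (roughly the altitude of Cusco)
print(spo2_alarm(91.0, 3400.0))   # False: 91 >= 94 - 3.4
```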
Keywords: Analog Signal; Arduino UNO; Digital Signal; ESP8266; Internet of Things (IoT); Infrared Sensor (IR); Light Emitting Diode (LED); NodeMCU; Object Detection; Signal; Ultrasonic Sensors
Sensors connected to an IoT board send data either to a local network or to the Internet. The Internet of Things is the accretion of connected sensors. It is estimated that by the year 2025 approximately 75.44 billion devices will be connected to the Internet of Things. In the IoT, the greatest number of devices will be small sensors that send information about what they are sensing. Here we design how an infrared sensor works with an ESP8266 NodeMCU board and also use these sensors to detect an object and measure its distance with an ultrasonic sensor. These sensors provide both digital and analog output. Here we use only the digital output, which is directly connected to a NodeMCU on the Arduino platform to read the sensor output. The infrared sensor is used for object detection and the ultrasonic sensor is used for distance measurement.
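A minimal sketch of the same wiring idea in MicroPython (the paper uses the Arduino platform; the pin assignments and the HC-SR04-style ultrasonic module here are assumptions):

```python
# MicroPython on ESP8266 (assumed pins; the original work uses Arduino C++)
from machine import Pin, time_pulse_us
import time

ir = Pin(5, Pin.IN)            # IR sensor digital output on GPIO5 (D1)
trig = Pin(12, Pin.OUT)        # Ultrasonic trigger on GPIO12 (D6)
echo = Pin(14, Pin.IN)         # Ultrasonic echo on GPIO14 (D5)

def distance_cm():
    # 10 us trigger pulse, then time the echo; sound travels ~29.1 us per cm
    trig.off(); time.sleep_us(2)
    trig.on();  time.sleep_us(10)
    trig.off()
    duration = time_pulse_us(echo, 1, 30000)   # timeout 30 ms
    return (duration / 2) / 29.1 if duration > 0 else None

while True:
    if ir.value() == 0:        # many IR modules pull the output low on detection
        print("Object detected, distance:", distance_cm(), "cm")
    time.sleep(0.2)
```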
Keywords: Natural Resources; Revenue sharing; Livelihoods; Sustainable Development; Investment
Most communities depend on natural resources such as forests, land, and water for their livelihoods. Nature is a source of human health because daily needs are met from ecosystem goods and services. Given the relationship between ecological and economic systems, aspects like improved living standards, livelihoods, biodiversity conservation, and human wellbeing require multi-disciplinary collaboration between communities and stakeholders to find a good approach that integrates biodiversity conservation and human wellbeing. We are interested in knowing the status of community-based conservation projects in the tropical region and in bringing a scientific contribution based on the findings. We reviewed existing documentation on community conservation and compiled the similarities among conservation practices that involve local communities. We carried out a comparative study in several countries located in the tropical region. The data show that in all the countries mentioned there is a will to integrate biodiversity conservation and community development, but there is a need to improve policies and regulations and to increase investment in community development projects. We also detected a lack of conservation professionals in decision making, which causes reluctance in implementing community conservation projects in some countries of the tropical region. Assessing the contribution of community conservation projects to improved livelihoods and sustainable biodiversity conservation in and around protected areas will help to improve community conservation. There is a need to assess the perceptions of local communities towards co-management in biodiversity conservation and ecosystem services in and around protected areas in tropical regions.
Keywords: Metamaterial; absorber; stealth; aircrafts; surveillance
This paper presents a Ku-band metamaterial absorber simulated using CST Microwave Studio software in the frequency range of 12-18 GHz [2]. The work proposes a design with a square-based ground-plane metamaterial absorber (MA). The overall size of the unit-cell assembly is 5 × 5 mm². This assembly covers the Ku band, which gives a wide band of absorption [1]. The structure is optimized at 13.3 GHz and 14.9 GHz, respectively. The designed metamaterial absorber serves stealth technologies; major applications also include airborne surveillance and military protection.
Keywords: Computational Path Delay; Latency; Vedic Multiplier; Vivado; Speed
We live in a technologically advanced society. The use of diverse electronic gadgets is interwoven with even the most fundamental aspects of our daily lives, and they quicken and smooth the pace of our life. The multiplier component controls the speed of most electronic systems with high-speed applications that employ the IEEE 754-2008 standard for single-precision FPUs. Several existing methods have been proposed to enhance the multiplier's speed of operation. They have, however, not demonstrated a substantial difference in speed, raising it by a maximum of 1.182 times.
As a result, we present "Vedic Design," a novel algorithm with a distinctive architecture. When simulated in Vivado, it improved the multiplier's speed by 3.4478 times, resulting in a multiplier that is nearly 3.5 times more efficient. The device is better equipped to function as a result of the reduced computational path latency.
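For readers unfamiliar with the vertical-and-crosswise (Urdhva Tiryagbhyam) pattern that Vedic multipliers are usually built around, here is a small software sketch of that pattern (an illustration of the classical sutra, not the hardware architecture proposed in the paper):

```python
def urdhva_multiply(a_digits, b_digits, base=10):
    """Multiply two numbers given as digit lists (most significant first)
    using the vertical-and-crosswise pattern that Vedic hardware multipliers
    compute in parallel."""
    n = len(a_digits)
    assert len(b_digits) == n
    cols = [0] * (2 * n - 1)
    # Each result column is a sum of cross products of digits
    for i in range(n):
        for j in range(n):
            cols[i + j] += a_digits[i] * b_digits[j]
    # Propagate carries from the least significant column upward
    result, carry = [], 0
    for c in reversed(cols):
        c += carry
        result.append(c % base)
        carry = c // base
    while carry:
        result.append(carry % base)
        carry //= base
    return list(reversed(result))

print(urdhva_multiply([1, 2], [1, 3]))  # [1, 5, 6] -> 12 * 13 = 156
```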
Confined Quantum Field Theory extends the spirit of special and general relativity into the quantum domain. Thereby each quantum object is represented by a bounded and connected manifold with a metric that is a function of its energy and a topology representing the type of particle. This gives any quantum object a well-defined size and position. This simple and basic statement takes us beyond uncertainty and paradox in quantum theory and makes it a much stronger instrument for solving many fundamental problems in many domains of physics. Here I give one example to demonstrate how this theory works: the emission of a photon by an accelerating electron. Since both electron and photon are bounded connected manifolds and not probable points, and the photon is a sub-manifold of the electron at the beginning of the emission, the two manifolds overlap, and the separation takes a short time. Therefore energy conservation is valid at all times, and there is no time-energy uncertainty here. The cases that show the superiority of this theory are numerous. I recommend the book "Confined Quantum Field Theory", second edition.
Keywords: Design Research; Philosophy; Prosperity; Well-being; Life
Design by research is a design methodology within the context of subject(s) for a defined group. Descriptive research is performed in the context of qualitative research. Philosophy, the behavioural sciences, sociology, economics, and other scientific fields contribute to the design domain. The ability to design can potentially change our behaviour and initiatives in the social and economic context. Design activities must be forward-looking so that design energy can be harnessed for humanity's necessary progress. In the context of the elderly, their vital energy will stimulate their valuable knowledge, skills, and experience for social and economic development. This positive practice brings satisfaction and happiness to the elderly. It benefits a society that expects the elderly to enjoy life by spending their money, but often without acquiring knowledge. According to the philosopher Peter Sloterdijk, changes are necessary for life. A life with initiative leads to the collective growth of the personality. This needed energy comes to us from the universe, which opens up our 'Self' and leads to changes in design thinking in the design domain. We discover perspective in design by looking at the design methods that form an umbrella for the design domain. A holistic approach to design will stick to the context of the design, and the need will change through human thinking and action. All design entities will consider the needs of humanity and present them during the design process. Philosophy and research into the phenomenon of design can lead to new insights into how life changes through design. Design has an intrinsic energy to transform ideas into material and spiritual prosperity and well-being. Design Research is concerned with design by research and with how it may be applied by designers from two different cultures of the world, but with a convergence.
After identifying the central idea of Nano [1] as "the power of mind over matter", the goal was to characterize, with elements of Lagrangian and Hamiltonian dynamics, concepts such as the conscience of a particle, the action due to a force of the field, and the field itself, seen as an intention [2, 3], and to unite the synergistic action of many particles acting together to create an organized transformation of matter. F.1
These are the first elements considered in creating a mathematical theory of nanotechnology [1] at the advanced level of field theory, which could be achieved in the coming years. Although the latter was mentioned in a first tribute to me [4], this theory is now a reality (Figure 1).
Meanwhile, we carry out interesting research in nanomedicine [5] and nanomaterials [6, 7], as part of the technological developments realized around the mathematical theory of nanotechnology, which have been realized and published in many references.
This study considers the causal structure of scattering phenomena through past and future light cones, creating the possibility of energy for one thousand years, perpetual motion machines, and star-gates (wormholes in space), with advanced analogues such as synchrotronic propulsion (advanced spaceships) and disintegrative mass weapons, using the same principles in every case.
For many years, many ideas have been developed around electromagnetic propulsion as a source of take-off, movement, and landing of advanced magnetic-levitation vehicles. In the year 2010, [8] started a research sub-program of his research program dedicated to the development of advanced vehicles driven by electromagnetic propulsion.
Several papers have been published in more than 20 different journals and book chapters on quantum mechanics and superconductivity [9, 10] for the purpose of establishing a consistent theory for the creation of these advanced vehicles, reinforced, taking advantage of his position as editor-in-chief of the journal on photonics and spintronics, with works by other researchers related to the study of superconductivity, Majorana fermions, and other related topics (Figure 2 and Figure 3). F.2
Just as electromagnetic fields are caused by a charge and gravitational fields are caused by weight (mass and force), any rotating object creates torsion fields.
Torsion fields can interact with laser beams (changing their frequency), creating effects of a diverse nature, for example in biological processes, where torsion directly affects DNA. They can also melt or solidify some materials and affect quartz crystals, increasing their properties as resonators. They also affect some electronic components, creating radiation coverings. Torsion can favorably change some beverages and has been noted to affect gravity. F.3
According to this theory, every substance has its own "chronal charge" defined by the quantity of "chronal" particles, which were named "chronons". In [13] it was supposed that while an object is spinning, its "chronons" interact with the "chronons" surrounding the object, and therefore the weight of the object changes. According to A.I. Veinik's theory, "chronons" generate the so-called "chronal" field. A.I. Veinik found experimentally that strong "chronal" fields can be generated by spinning masses. In [14], schemes of coverings were explained and some properties of "chronal" fields were measured, and it was found that two types of "chronons" exist ("plus" and "minus" chronons). It is important to emphasize the conclusion that the sign of a "chronon" depends on the orientation of its spin. Some of these facts could explain the hyperbolicity of space, its law of minimal action, and the geometrical trajectories (the "brachistochrone curve") satisfying this law of minimal action, consigned in the inertial law inside the Einstein equations, with expansion reflected in the Christoffel symbols considered in the gravitation equations.
The spinning space can be consigned in a smooth space (like the apparent uniformity of space-time on the ground) when the energy fluxes of the spins derive into neutrinos and these fill all of space with energy. The problem with managing chronogeometry with these particles is the definition of the synchronicity required by certain processes at the quantum level, and their integration into a synergic action or "organized transformations" to obtain the reality of space-time [2].
Currently, with the research groups that I direct, we are considering torsion [15, 16] as the principal effect to generate propulsion in an advanced model of ship, considering the path through electromagnetic plasma [17] (figure 4 A).
Likewise, through a capture and detection camera, the electromagnetic properties of the ionic flow of space ~IIe (ρ, u), derived from the electromagnetic plasma ~IIH, are measured, and an ionic propeller is proposed, considering the pressure gradients due to electrons and ions concentrated in a small region of the shock waves produced by an electric field. Mean curvature energy [18, 19] is also used to measure and control the ionic flow. Research is currently at this step (figure 4 B). F.4
We must create or invent new mathematics to describe a complete quantum mechanics based on synchronicity: a synchronic quantum mechanics, which is there in the causal structure of the Universe but at a deeper level. All these are theories developed around field theory to define the intentionality of a field and apply it, in nanotechnology, to matter.
Studies in condensed matter and MHD have also been incorporated. The only energy for the whole ship (even for nanomedicine processes) must come from the reactor of electromagnetic plasma.
The central idea of Lagrangian dynamics is to study the movement of particles, even in continuous media, and the causes that originate this movement, which brings an intrinsic transformation of space due to energy. This is not so for the physical entities, which must be invariant under the choice of coordinate-system transformations.
Keywords: Face Recognition; Face Detection; Security; Authentication
ML has grown in popularity over the previous decade as a result of powerful computers that can process large amounts of data in a reasonable length of time. A well-known problem in machine learning is the dog breed classifier. The issue is determining a dog's breed. The input can be a dog or a human image, and the proposed algorithm should be able to predict the dog's breed or, if the input is a human, which breed the person most resembles. For human face detection, OpenCV is utilised, while for dog face detection, the VGG16 model is employed. A convolutional neural network with the ResNet101 architecture is utilized for classification. On test data, the final model had an accuracy of 81 percent.
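A minimal sketch of the transfer-learning setup described (ResNet101 backbone with a new classifier head); the breed count, batch, and hyperparameters are assumptions, and a recent torchvision (0.13+) with the weights enum is assumed:

```python
import torch
import torch.nn as nn
from torchvision import models

N_BREEDS = 133   # typical count in the public dog-breed dataset; an assumption here

model = models.resnet101(weights=models.ResNet101_Weights.IMAGENET1K_V1)
for p in model.parameters():          # freeze the pretrained backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, N_BREEDS)   # trainable classifier head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

# One illustrative training step on a dummy batch
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, N_BREEDS, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```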
Keywords: advanced optimization techniques; algorithm-specific parameters; Jaya algorithm; Rao algorithm; teaching-learning-based optimization algorithm
The performance, utilization, reliability, and cost of a system are all improved when optimization techniques are used to solve engineering problems. Researchers have used a number of traditional optimization techniques, such as geometric programming, nonlinear programming, sequential programming, and dynamic programming, to solve these problems. Traditional optimization techniques have been effective in many real-world problems, but they have some drawbacks that are primarily caused by the search algorithms built into them. Researchers have therefore developed a number of advanced optimization algorithms, commonly referred to as metaheuristics, to overcome the limitations of traditional optimization techniques. All of the probabilistic evolutionary and swarm-intelligence-based algorithms used to solve optimization problems require common control parameters like population size, number of generations, and elite size, and, in addition, their own algorithm-specific control parameters. The effectiveness of these algorithms is significantly influenced by the proper tuning of the algorithm-specific parameters; incorrect tuning results either in increased computational effort or in convergence to a local optimum. This article presents a review of the application of algorithm-specific parameter-less algorithms in electrical engineering applications. It is expected to play a major role in guiding research scholars in the application of advanced intelligent optimization techniques.
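As a concrete example of an algorithm-specific-parameter-less method of the kind reviewed, here is a compact sketch of the Jaya update rule; only population size and iteration count are needed, and the objective function and bounds are illustrative only:

```python
import numpy as np

def jaya(objective, bounds, pop_size=20, iters=200, seed=0):
    """Minimal Jaya optimizer: every candidate moves toward the current best
    and away from the current worst; no algorithm-specific parameters."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(bounds)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.apply_along_axis(objective, 1, pop)
    for _ in range(iters):
        best, worst = pop[fit.argmin()], pop[fit.argmax()]
        r1, r2 = rng.random((2, pop_size, dim))
        cand = pop + r1 * (best - np.abs(pop)) - r2 * (worst - np.abs(pop))
        cand = np.clip(cand, lo, hi)
        cand_fit = np.apply_along_axis(objective, 1, cand)
        improved = cand_fit < fit
        pop[improved], fit[improved] = cand[improved], cand_fit[improved]
    return pop[fit.argmin()], fit.min()

# Illustrative use: minimize the sphere function in 5 dimensions
x_best, f_best = jaya(lambda x: float(np.sum(x**2)), [(-5, 5)] * 5)
print(x_best.round(4), f_best)
```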
Keywords: Digital model; teaching identity; professionalism; transforming school
To transform education, it is necessary to innovate in knowledge, identity, and teaching practice; that is why teachers must conceive their pedagogical practice as a way to strengthen their professional identity and carry out their teaching work following the trends and challenges of education, showing good preparation, developing research, using technology, and reflecting on their teaching practice. In order to improve the development of teacher professionalism and identity (DPID), the sustainable digital model of a transforming school was applied. The design was pre-experimental of an explanatory type; a questionnaire was applied to 31 teachers from a school in Peru, taking into account the dimensions of teacher performance established in the Good Teacher Performance Framework (MBDD), and the dimension with the greatest problem was chosen. By applying the model, it was possible to reverse the identified problem, raising the high level to 67.74%.
Keywords: Analytical Hierarchy Process; Profile Matching; Linear Interpolation; Dean’s List
Determining the best dean's list is one way to motivate students to complete their studies at a tertiary institution. However, determining the best dean's-list candidate consistently and transparently is not easy. In this study we therefore combined the AHP and Profile Matching methods, together with a linear interpolation model, using the criteria of grade point average (GPA) obtained, subjects taken, and repeated subjects. This study aims to provide specific knowledge about how the combination of the AHP-Profile Matching method and the linear interpolation model can be used to build the best dean's-list decision support system. The two methods work together according to their respective roles: the AHP method calculates priority levels and criteria consistency values, while the Profile Matching method matches data values with target data, determines the weights of the competency GAP values, and calculates the ranking value of each candidate using the priority-level values obtained from the AHP calculation; the mapping weight values in decimal form are obtained with the linear interpolation model and used as the weights of the competency GAP values. The results show that the two methods were successfully combined, giving a consistency ratio of 0.030 for each criterion rating scale, and were able to determine the best dean's list among 52 candidates, with the highest ranking value of 5.388.
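A small sketch of two of the building blocks described, AHP priorities with a consistency ratio and a linearly interpolated GAP weight; the comparison matrix and the anchor table are illustrative, not the study's actual data:

```python
import numpy as np

def ahp_priorities(pairwise):
    """Priority vector and consistency ratio from an AHP pairwise matrix."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    w = (A / A.sum(axis=0)).mean(axis=1)          # normalized column averages
    lam_max = (A @ w / w).mean()                  # principal eigenvalue estimate
    ci = (lam_max - n) / (n - 1)
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n]   # Saaty's random index (n <= 5)
    cr = ci / ri if ri else 0.0
    return w, cr

def gap_weight(gap, table=((-4, 1.0), (0, 5.0), (4, 1.0))):
    """Competency-GAP weight by linear interpolation between anchor points
    (the anchor table here is illustrative, not the paper's actual mapping)."""
    xs, ys = zip(*table)
    return float(np.interp(gap, xs, ys))

# Illustrative use with a made-up 3x3 comparison of GPA, subjects taken, repeats
w, cr = ahp_priorities([[1, 3, 5],
                        [1/3, 1, 2],
                        [1/5, 1/2, 1]])
print("priorities:", w.round(3), "consistency ratio:", round(cr, 3))
print("weight for GAP = -1:", gap_weight(-1))
```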
After the Sumerians learned to write on clay boards, people came up with the idea of the abacus, using wood and clay boards. The board is divided into columns following a base-60 number system, and objects of different shapes and sizes are placed in those columns for calculations. The column order was 1's, 10's, 60's, 600's, and 3600's, and addition and subtraction were done by placing and removing tokens.
After this came the modern abacus. Since people are used to counting on their fingers, we can count up to 10 this way; to count beyond 10 we need more fingers or some other object. This led to the abacus with base ten, i.e., 1's, 10's, 100's, 1000's, and so on.
Then came the era of computers, and all we have is electricity to communicate with these electronic devices. So, just as Morse code uses dots and dashes to encode the alphabet for communication between people, we have an on/off switch to communicate with a computer. Since calculation is an integral part of computing, which is the core functionality of computers, we need to build everything around this as part of communication. This led us to take the familiar ten-finger, base-10 counting and reduce it to base 2 to accommodate the two-level on/off switch. The base-2 binary code has 0's and 1's filling the 2^0, 2^1, 2^2, ... places.
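A tiny illustration of those binary place values (my own example, not from the text):

```python
# 13 in binary: 1101 -> 1*2**3 + 1*2**2 + 0*2**1 + 1*2**0
n = 13
bits = bin(n)[2:]                      # '1101'
places = [int(b) * 2**i for i, b in enumerate(reversed(bits))]
print(bits, places, sum(places))       # 1101 [1, 0, 4, 8] 13
```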
Computers
A computer can store and process data. Most computers use binary code, which uses two variables, 0 and 1, to complete storing data and calculations. Throughout history many prototypes have been developed leading to the modern-day computer. During World War II, physicist John Mauchly, engineer J. Presper Eckert, Jr., and their colleagues at the University of Pennsylvania designed the first programmable general-purpose electronic digital computer, the Electronic Numerical Integrator and Computer (ENIAC). Programming languages, such as C, C++, JavaScript and Python, work using many forms of programming patterns. Programming, which uses mathematical functions to give outputs based on data input, is one of the regular ways to provide instructions for a computer.
Binary Code and Transistors
Computers are made using transistors and operate based on the flow of electricity. The binary code is just a representation of whether a transistor is conducting or not. A simple addition operation using transistors:
Say we want to compute 1 + 1 = 2: take two transistors and allow voltage to flow (transfer) through both, and we get two times the voltage.
If we want 0 + 1 = 1, take the same two transistors but make one of them block the voltage, allowing only one path, and we get just one times the voltage.
The bigger the number and the operation, the more transistors are required in the computer for the computation.
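In practice the transistors are combined into logic gates; the one-bit half adder below shows how binary addition emerges from two such gates (a logic-level simulation, not a transistor-level model):

```python
def half_adder(a: int, b: int):
    """One-bit half adder built from two logic gates."""
    s = a ^ b          # XOR gives the sum bit
    carry = a & b      # AND gives the carry bit
    return carry, s

for a in (0, 1):
    for b in (0, 1):
        print(a, "+", b, "=", half_adder(a, b))   # printed as (carry, sum)
```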
Modern-day computers make computing easy, but computing has existed for about 4,000 years. It was during the Bronze Age that Sumer developed and rose to prominence as the first urban civilization.
Their harvests and flocks grew too large to track without aid, so they kept track of their livestock and crops using notches on tally sticks: one bigger notch representing '10' and one smaller notch representing '1'. They performed calculations by grouping these notches together.
Optoelectronic devices are components based on the interaction of light, within different spectral regions, with electronic devices. Laser diodes, light-emitting diodes (LEDs), photodetectors, image sensors, electro-optic modulators, opto-isolators, phototubes, image intensifiers, and photonic integrated circuits are examples of optoelectronic devices. These devices are widely used in different applications such as laser technology, optical fiber communications, and optical metrology, and the rapid developments in these applications have created a demand for high-speed or fast-response components. For some, fast response means fast response in communication networks. For others, it refers to high-speed imaging, which leads scientists to develop high-speed cameras with low-light and ultra-fast capture without blur. Nevertheless, for many researchers fast response means generating laser light in the attosecond regime, which was achieved by [2], and the best is yet to come. The most common way to improve the performance of optoelectronic devices is by using suitable and efficient materials to fabricate them. Optoelectronic devices are mainly based on semiconductor materials. However, 2D materials have shown outstanding optical characteristics and performance for fabricating photovoltaic cells, optical fibers, quantum computing, sensing, and security devices [3]. Besides, 2D materials address several challenges, for example achieving high efficiency and speed, lower power consumption, and a smaller carbon footprint. Recent developments in combining 2D materials with other structures yield new tunable band structures, ultra-high nonlinear coefficients, and ultra-fast carrier mobility. As a consequence, 2D materials such as black phosphorus (BP), graphene, and transition metal dichalcogenides (TMDs) have been efficiently utilized in bio-sensing, laser sources, optical communication, and photodetector applications [4]. As a result, 2D materials are considered promising candidates for fabricating ultra-fast optoelectronic components.
Wireless sensor networks (WSNs) are attracting attention not only in industry but also in academia because of their enormous application potential and their unique safety challenges. Wireless sensor networks have been used for many applications, from ecological monitoring to logistics and tracking. In addition, wireless sensor networks can be used in applications such as wellbeing monitoring and control, environmental and terrestrial monitoring, biomedical health monitoring, home automation, traffic control, natural disaster relief, and seismic sensing. WSN technology made huge and rapid progress in the early 1990s, and the latest stage of WSN development continues to the present. With the rapid development of computing, micro-electro-mechanical systems (MEMS), and other technologies, sensors are becoming smaller in size and cheaper in price. These advancements have provided WSNs with opportunities for commercial use in many areas. Companies like Memsic and Crossbow Technology began to produce wireless motes, sensors, and software support. The standardization of protocols has also matured: standards like ZigBee (802.15.4) and 6LoWPAN have been established and are commonly used in WSN communications. The integration of WSN technology with MEMS makes motes with extremely low cost, miniaturized size, and minimal power. MEMS include inertial sensors, pressure sensors, temperature sensors, humidity sensors, strain-gauge sensors, and various piezo and capacitive proximity sensors. Over the last decade, WSN technology has been widely used in many real-time applications, and these miniaturized sensors can sense, process, and communicate. Most wireless sensor nodes are capable of measuring temperature, acceleration, light, illumination, humidity, and the levels of gases and chemical materials in the surrounding environment. Wireless sensor networks aim to provide coordination between physical conditions and the Internet world.
Keywords: Holy Quran text analysis; Text mining; Arabic text mining; 1N Form; 2N Form; Data modeling; Holy Quran Tafseer text
There is a huge shortage of scientific research in the Arabic language, especially in natural language processing and in the relationships between Arabic documents and other specific documents. This shortage is also reflected in engines for the introduction of Arabic books, topic abstraction, and content summarization. Furthermore, there are good samples of Arabic words inside the Quran. The Quran is the holy book of Islam, divided into chapters (surahs) and verses (ayat) of differing lengths and topics. This paper introduces a framework for both specialized researchers in Islamic studies and non-specialized researchers to find hidden relationships between one of the most important chapters of the Holy Quran, the Al Fatiha surah, and the remaining chapters of the Holy Quran, using hierarchical data modeling as an unsupervised learning technique. The new framework can access tokens of the Holy Quran at different levels of granularity, such as a chapter (surah), a part of a chapter (ayah), words, word roots, ayah roots, and ayah meanings in the Arabic language. Moreover, we developed many statistics related to the Fatiha surah and the Holy Quran (distinct roots for every surah, word redundancy, root redundancy, a matrix report of roots by surah, a matrix report showing the percentage of root similarity between every surah and the distinct roots of the whole Quran, etc.). Furthermore, we enhance the search-engine results by adding search by roots and by ayah meaning for every surah. The results for sample queries show more than 3% higher accuracy when using meanings and roots compared to searching the text of the Holy Quran only.
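A small sketch of the underlying technique, hierarchical clustering of texts by token similarity, using toy strings rather than the Quranic corpus and token streams the paper actually processes:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Toy documents standing in for chapter texts (the real work uses Quranic text,
# roots, and meanings as separate token streams).
docs = [
    "praise mercy guidance path",
    "mercy compassion guidance",
    "battle treaty tribe",
    "tribe treaty alliance battle",
]

tfidf = TfidfVectorizer().fit_transform(docs).toarray()
dist = pdist(tfidf, metric="cosine")          # pairwise cosine distances
tree = linkage(dist, method="average")        # agglomerative (hierarchical) tree
labels = fcluster(tree, t=2, criterion="maxclust")
print(labels)                                  # e.g. [1 1 2 2]: two thematic groups
```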
Keywords: GeoGebra; Teaching Mathematics; Learning Mathematics; Teachers’ training
The objective of this study is to identify the challenges and difficulties faced by Ecuadorian teachers in the implementation of GeoGebra as a didactic resource for the teaching and learning of mathematics. The research is framed in a mixed exploratory sequential approach with a population of 832 teachers who were trained by the Ecuadorian Institute of GeoGebra (IEG) at the National University of Education in the period 2017-2020. The sampling was non-random, comprising N = 144 teachers who answered the online questionnaire. The instrument was a 32-item questionnaire grouped into four parts: (1) sociodemographic aspects, (2) difficulties in implementing GeoGebra in the classroom, (3) challenges of implementing GeoGebra rated on a Likert scale, and (4) open-ended questions about the advantages and disadvantages of using GeoGebra in the teaching and learning of mathematics. Quantitative and qualitative data were analyzed with SPSS and Atlas.ti. The results show that the main difficulties teachers have faced in implementing GeoGebra in the classroom are the digital divide of the 21st century, expressed as lack of access to technological equipment (87.5%), followed by lack of training in the use of GeoGebra (79.1%). The advantages of using GeoGebra respond to its potential in the teaching and learning of mathematics, linked to its dynamic, innovative, interactive, and user-friendly character.
In essence, this article introduces a digital platform (see Figure 1) that can be located on the Earth’s surface, or anywhere above or below it. It provides a local project “workspace” that can be tied precisely to the Earth-Centered Earth-Fixed (ECEF) coordinate system of an ellipsoid. It is, in nature, like the old Polar Projection, with three differences:
We are living in an automated world where technical advancements are taking place and devices are connected to cyberspace. There has been a technical evolution in the Internet of Things, image processing, and machine learning, and the trend is toward drastic changes in systems to achieve accurate results. These technical advancements have also led to changes in the education system. Attendance marking in a classroom during a lecture is not only an onerous task but also a time-consuming one, and proxy attendance has increased with the growing number of students. Traditional methods are not an efficient way of marking attendance accurately; hence, an advanced face recognition approach using artificial intelligence is introduced in this work. In recent years, the problem of automatic attendance marking has been widely addressed using standard biometrics such as fingerprints and radio-frequency identification (RFID) tags; however, these techniques lack reliability. The attendance system is a typical example of this transition, moving from the traditional signature on a paper sheet to face recognition. This paper proposes a method for developing a comprehensive embedded class attendance system using facial recognition together with door access control. In this work, an automated attendance marking and management system is proposed by making use of face detection and recognition algorithms. Instead of using conventional methods, the proposed system aims to record students' attendance automatically by using facial recognition technology.
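A minimal sketch of the detection-and-logging idea follows: faces in a classroom frame are detected with OpenCV and an attendance record is appended. The input image name is hypothetical, and the identification step (which student the face belongs to) is represented only by a placeholder stub, not the paper's recognition model.

```python
# Detect faces in one classroom frame and append attendance rows to a CSV file.
import csv, datetime
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def recognize(face_img):
    # Placeholder: a real system would match embeddings (e.g. LBPH or a CNN) here.
    return "unknown_student"

frame = cv2.imread("classroom.jpg")                  # hypothetical input frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

with open("attendance.csv", "a", newline="") as f:
    writer = csv.writer(f)
    for (x, y, w, h) in faces:
        student_id = recognize(gray[y:y + h, x:x + w])
        writer.writerow([student_id, datetime.datetime.now().isoformat()])
```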
Keywords: Advancements; Artificial Intelligence; Biometrics; Convolution Methods; Cyber; Face recognition technology; Machine learning
Water quality can be affected by either natural or anthropogenic factors. In this study, water quality indices of Kofa Dam were determined for drinking and irrigation purposes. Water samples were analyzed for selected physicochemical parameters (pH, electrical conductivity, chloride, total dissolved solids, salinity, sulphate, sodium, nitrate, calcium, temperature, turbidity and bicarbonate). The results of the analysis were compared with the drinking and irrigation standards of NSDWQ, FAO and WHO. The results, in relation to these standards, were then used to compute indices for drinking and irrigation purposes using the weighted arithmetic index. The results of the physicochemical parameters from each of the sampling locations of Kofa Dam, and the overall score, show that the dam failed the index; therefore, the water is not suitable for drinking purposes. However, for irrigation purposes, the indices indicate a need for caution in the use of the water. This suggests that anthropogenic activities such as farming around the dam and the presence of residential houses that discharge their effluent into the dam are already becoming a source of threat to the reservoir. Hence, there is a need for regulation of activities around the dam to prevent further deterioration of the water.
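The weighted arithmetic index mentioned above can be illustrated with a short sketch; the parameters, measured values, standards and ideal values below are hypothetical placeholders, not the Kofa Dam data. On the usual reading of this index, scores above 100 are taken to indicate water unsuitable for drinking.

```python
# Weighted arithmetic water quality index (WQI):
#   w_i = K / S_i with K = 1 / sum(1/S_i)   (unit weights)
#   q_i = 100 * (V_i - V_o) / (S_i - V_o)   (quality rating, V_o = ideal value)
#   WQI = sum(w_i * q_i) / sum(w_i)
def weighted_arithmetic_wqi(measured, standard, ideal):
    k = 1.0 / sum(1.0 / s for s in standard.values())
    num, den = 0.0, 0.0
    for p, v in measured.items():
        w = k / standard[p]
        q = 100.0 * (v - ideal[p]) / (standard[p] - ideal[p])
        num += w * q
        den += w
    return num / den

measured = {"pH": 7.8, "TDS": 620.0, "NO3": 55.0}   # hypothetical sample values
standard = {"pH": 8.5, "TDS": 500.0, "NO3": 50.0}   # hypothetical permissible limits
ideal    = {"pH": 7.0, "TDS": 0.0,  "NO3": 0.0}     # ideal values (pH ideal taken as 7)
print("WQI =", round(weighted_arithmetic_wqi(measured, standard, ideal), 1))
```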
Keywords: Drinking water; Indices; Irrigation; Kofa Dam; Suleja; Water quality; Nigeria
Pattern recognition is the categorization and classification of specific patterns based on predefined characteristics from sets of available data. Implementing by machine many human skills, such as face recognition, speech recognition, and reading handwritten letters, with very high robustness to noise and varying environmental conditions (as exists in humans) is one of the problems that has been the focus of researchers in various engineering fields, such as artificial intelligence and machine vision, over the last few decades. Pattern recognition has many applications in various fields of science, including electrical engineering (medicine, computing and telecommunications), biology, machine vision, economics and psychology. Among these applications are recognition of voice, face, handwriting, fingerprints and signatures; automatic disease detection from medical data (signals or images); detection of DNA strands; industrial automation; and remote sensing. Pattern recognition, in short, deals with the problems of supervised and unsupervised clustering and classification, and includes a wide range of classical statistical methods, intelligent algorithms, neural networks and fuzzy logic. In this regard, I recommend the book “Hands-on Machine Learning with Scikit-Learn, Keras, and TensorFlow” by Aurélien Géron, Second Edition.
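As a minimal, concrete example of the supervised classification task described above (in the spirit of the recommended book), the sketch below trains and evaluates a simple classifier with scikit-learn on a standard toy dataset.

```python
# Supervised pattern recognition in a few lines: fit a k-nearest-neighbours
# classifier on the iris dataset and report its test accuracy.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```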
In this work, hybrid composites were produced from Borassus flabellifer leaf fiber (BFLF) and Kevlar fibers in an epoxy matrix to establish their effect on the physical and mechanical properties of the composites. A hand lay-up technique followed by compression was used to produce hybrid composite samples with different combinations of BFLF and Kevlar fiber. The mechanical and physical analyses were carried out using the associated ASTM standards. The results revealed that 2K10B had the lowest density of 1.023 g/cm3, about 22.91% lower than the neat sample. The 6K6B sample had the best properties: its water absorption, tensile strength (TS), flexural strength and impact strength were 53.14%, 50.5%, 14.8% and 48.54% higher, respectively, than those of the 100% Kevlar samples, while its elongation and elastic modulus were 11.1% and 44.27% lower, respectively.
Keywords: Borassus Flabellifer Leaf Fiber (BFLF); Tensile Strength (TS); Water Absorption; Impact Strength; Flexural; Hybridization
The study constructed an estimation of the significance of the driving factors that influence artificial intelligence (AI) adoption and implementation in the public sector, and highlighted a critical research area that is currently understudied. A theoretical framework, underpinned by the diffusion of innovation (DOI) theory, was developed from a combination of the technology, organization, and environment (TOE) framework and the human, organization, and technology (HOT) fit model. The best-worst method was used to analyze and rank the identified driving factors according to their weighted averages. The findings of the study pointed to privacy and security; reliability, serviceability and functionality; regulation; interpretability and ease of use; IT infrastructure and data; and ethical issues as the highest-ranked driving factors for AI adoption and implementation in government institutions. The study has significant implications for policy makers and practitioners, as it would augment their perspectives on how to adopt and implement AI innovations.
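The weight derivation step of the best-worst method (BWM) can be sketched as the small linear programme below: minimize the maximum deviation ξ subject to |w_B - a_Bj w_j| ≤ ξ and |w_j - a_jW w_W| ≤ ξ with the weights summing to one. The criteria names and the two comparison vectors are hypothetical, not the study's expert judgements.

```python
# Best-worst method weights via linear programming (illustrative data).
from scipy.optimize import linprog

criteria = ["privacy_security", "reliability", "regulation", "ease_of_use"]
best, worst = 0, 3                       # indices of the best and worst criteria
a_best_to_others = [1, 2, 3, 5]          # best criterion compared to each criterion
a_others_to_worst = [5, 3, 2, 1]         # each criterion compared to the worst

n = len(criteria)
A_ub, b_ub = [], []
for j in range(n):
    for sign in (1, -1):                 # linearize the two absolute-value constraints
        row = [0.0] * (n + 1)
        row[best] += sign
        row[j] -= sign * a_best_to_others[j]
        row[n] = -1.0                    # ... - xi <= 0
        A_ub.append(row); b_ub.append(0.0)
        row = [0.0] * (n + 1)
        row[j] += sign
        row[worst] -= sign * a_others_to_worst[j]
        row[n] = -1.0
        A_ub.append(row); b_ub.append(0.0)

c = [0.0] * n + [1.0]                    # minimize xi
A_eq, b_eq = [[1.0] * n + [0.0]], [1.0]  # sum of weights = 1
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * n + [(0, None)])
for name, w in zip(criteria, res.x[:n]):
    print(f"{name}: {w:.3f}")
```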
Keywords: privacy and security; innovation; artificial intelligence; government; technology; best-worst method
With the increasing size and complexity of high-power digital transmitters in the broadcasting network, their fault diagnosis has become a cumbersome and time-consuming process, while breaks in transmission are not tolerated. These transmitters, located in different parts of the country, are required to provide uninterrupted service round the clock. Sometimes they develop faults that are complex in nature and call for the services of a transmitter expert. However, experts may not be available all the time, and fault diagnosis takes a long time even for experts using manual techniques; it is therefore highly desirable to provide a computer-based fault diagnosis expert system.
Requirement engineering has attracted attention in both academia and industry, as today's software is expected to provide highly customer-centric functionality and quality. Requirement elicitation is the major step of any software development project: it has a direct impact on the development life cycle, and incomplete or ambiguous requirements create confusion for stakeholders. It is the step that leads a project to success and fulfils the desires of users, or the step that leads it to failure. Many factors affect the requirement elicitation process; in this paper we examine the impact of organizational factors on requirement engineering. Eliciting stakeholder needs is an initial, but continuous and critical, phase of software development. This phase is characterized by a high degree of error, influenced largely by communication problems. Software development is a dynamic process in which changing needs are inevitable, and software updates are driven by all types of changes, including changes in requirements. Such changes cause requirement instability, which has several implications for the software development life cycle. In particular, the findings reveal that requirement instability has a significant impact on time and cost overruns in software projects. Our investigation also examined the factors that contribute to requirement volatility, and found that frequent communication between users and developers and the application of a clear approach to requirements analysis and modeling contribute to the stability of requirements. The main purpose of this survey is to examine the requirement elicitation process from the perspective of intra- and extra-organizational factors. The quality of requirements is critical to the success of a project; negotiating requirements, however, is not an easy task. Differences in vision, mental models and expectations between users and analysts make this work difficult and controversial. In many cases, clients are not completely clear about their real needs; in others, the current operating procedure does not meet the expectations of management. In this paper we have examined the problems and factors of the elicitation process.
Keywords: Impact of organizational factors on RE; Requirement elicitation; Intra-organization
This century will see an urgent up-scaling in the global production of copper nanoparticles due to continuous industrial development and the mounting need to address pressing global issues, including non-pharmaceutical disease management and climate change. Mining, beneficiation, refining, reagent synthesis, and finally nanoproduct synthesis are the typical linear multistage steps in the traditional copper nanoparticle synthesis process, which is energy and resource intensive. The use of nanometer-scale zerovalent iron particles as a reducing agent for environmentally friendly copper nanoparticle manufacturing from waste copper dust is discussed in this paper. Based on this review, it is clear that the method has significant potential and could represent a completely new paradigm for the conversion of low-grade Cu-bearing waste (such as waste copper dust) into useful nanoparticulate Cu compounds for a variety of industrial applications.
Keywords: waste copper dust; cementation; chemical reduction; hydrometallurgy; valorization
In a vapour compression refrigeration system, refrigeration is obtained by supplying electric power to the compressor, and mainly a liquid-type refrigerant is used. In a vapour absorption refrigeration system, refrigeration is obtained by supplying heat energy to the generator (the device that receives heat from an external source and does the work of the compressor of a compression system); it uses both a refrigerant (liquid type) and an absorbent (solid type). In a gas cycle refrigeration system, a gaseous refrigerant is used. The ejector-type refrigeration system is a modified form of the vapour compression system in which an ejector is used in place of the compressor; here mainly water is used as the refrigerant. Lastly, there is the thermoelectric refrigeration system, which works on the thermoelectric effect (evolution of heat at one junction and absorption of heat at the other junction when an electric current passes through a thermocouple circuit). A thermocouple is an electrical device consisting of two dissimilar electrical conductors forming an electrical junction.
The COP (coefficient of performance), i.e. the performance index of refrigeration, differs among refrigeration types. The COP of a compression refrigeration system is always more than 1 and higher than that of an absorption system. The COP of the gas cycle is higher than that of the compression refrigeration system. The COP of the ejector type is up to 0.3, and for thermoelectric refrigeration systems it is typically between 0.3 and 0.7.
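A small worked illustration of these COP definitions, with hypothetical energy figures, is given below.

```python
# Illustrative COP calculations: COP = useful refrigerating effect / energy input.
# All figures are hypothetical and in kW.
q_evap = 10.0        # refrigerating effect

# Vapour compression: input is compressor work.
w_comp = 3.0
cop_compression = q_evap / w_comp      # ~3.3, typically greater than 1

# Vapour absorption: input is generator heat (small pump work neglected here).
q_gen = 14.0
cop_absorption = q_evap / q_gen        # ~0.7, lower than compression

# Thermoelectric: cooling per unit electrical power, typically 0.3-0.7.
p_elec = 20.0
cop_thermoelectric = q_evap / p_elec   # 0.5

print(cop_compression, cop_absorption, cop_thermoelectric)
```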
Keywords: Absorption; Compression; Ejector; Gas; Refrigeration; Temperature; Thermoelectric
Refrigeration is the art of maintaining a system or object at a temperature lower than that of its surroundings. Refrigeration is used in everyday life. Foodstuffs are stored at a congenial temperature to prevent spoilage, which is reduced because micro-organisms are inactive at lower temperatures. In the medical field, different medicines are stored at refrigerated temperatures to prevent them from spoiling. Refrigeration is also utilized to maintain a given volume of space at a comfortable temperature: during summer the required space is maintained at a temperature lower than ambient for comfort, while during winter it is maintained at a temperature higher than ambient. Different types of refrigeration are available for achieving this.
In Duluth, the largest demographic living in poverty is 18-24-year-olds. Drivers within this age range are also over-represented in crash statistics in the state of Minnesota. Further, owning and operating a personal vehicle can be costly, especially for young drivers with no stable or high income. Sustainable commute modes include commuting with low impact on the environment, transporting more than one passenger, or replacing fossil fuels with green energy. Behavioral changes are necessary to get the maximum benefits from sustainable commuting such as encouraging the use of alternative modes of transportation like the public transportation system.
Although the benefits of sustainable commuting include saving money, being eco-friendly, and having a positive social impact on society, a survey of 370 18-24-year-old drivers found that 46% choose their vehicle as their primary commuting option. This research explores the perception of young drivers in Duluth toward the use of public transportation. Based on the factors from the Theory of Planned Behavior (TPB), the study shows that even if their attitude was favorable and there existed a strong social structure, within Duluth, toward using the bus, control factors exist that impede their decision to use the bus. If these factors are not addressed, then ridership will continue to be low.
Keywords: travel behavior; young drivers; sustainable commuting; public transportation
Distracted driving has become a serious traffic problem. This study proposes an image processing and multi-model fusion scheme to maximize the accuracy of distracted-driving discrimination. First, the training dataset and the test dataset are processed to specific specifications by translation and clipping. Second, we set VGG16 as the benchmark model for evaluation and train ResNet50, InceptionV3 and Xception models on the input images. Finally, considering that each model has its own advantages, we freeze part of the network layers to fine-tune each model, remove the fully connected output layers of each fine-tuned model, connect the resulting features in series, and then learn the weight of each model through neural network training.
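A hedged sketch of the fusion idea follows: three ImageNet-pretrained backbones are frozen, their pooled features are concatenated in series, and a small trainable head learns how to combine them. The input size, number of classes and head layout are assumptions for illustration, not the paper's exact configuration.

```python
# Multi-model feature fusion with frozen pretrained backbones (Keras sketch).
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import ResNet50, InceptionV3, Xception

NUM_CLASSES = 10                        # e.g. distracted-driving categories (assumed)
inputs = layers.Input(shape=(299, 299, 3))

backbones = [ResNet50(include_top=False, weights="imagenet", input_tensor=inputs),
             InceptionV3(include_top=False, weights="imagenet", input_tensor=inputs),
             Xception(include_top=False, weights="imagenet", input_tensor=inputs)]

features = []
for net in backbones:
    net.trainable = False                            # freeze pretrained layers
    features.append(layers.GlobalAveragePooling2D()(net.output))

x = layers.Concatenate()(features)                   # serial connection of the features
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)  # learns the combination

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```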
Face recognition is an important application in computer vision and biometrics. In this paper, we propose a novel approach to face recognition based on modular PCA (Principal Component Analysis). The proposed method improves the accuracy and efficiency of face recognition by dividing the face image into multiple overlapping sub-blocks, and then applying PCA to each sub-block independently. The resulting sub-block eigenfaces are then combined to form a composite face feature vector, which is used for face identification. Experimental results on several standard face recognition datasets demonstrate that our approach outperforms other state-of-the-art methods in terms of recognition accuracy and computational efficiency. The proposed method is also shown to be robust to variations in lighting, facial expressions, and occlusion.
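A hedged sketch of the modular PCA idea follows: each face image is split into sub-blocks, a separate PCA is fitted per block position, and the block projections are concatenated into a composite feature vector. For brevity the blocks here are non-overlapping and the sizes are illustrative, unlike the overlapping sub-blocks described in the paper.

```python
# Modular PCA sketch: per-block PCA, concatenated into one feature vector.
import numpy as np
from sklearn.decomposition import PCA

def make_blocks(img, block=16):
    h, w = img.shape
    return [img[r:r + block, c:c + block].ravel()
            for r in range(0, h, block) for c in range(0, w, block)]

def fit_modular_pca(train_imgs, block=16, n_components=10):
    blocks = [make_blocks(im, block) for im in train_imgs]   # per-image block lists
    pcas = []
    for b in range(len(blocks[0])):                          # one PCA per block position
        data = np.stack([blocks[i][b] for i in range(len(train_imgs))])
        pcas.append(PCA(n_components=n_components).fit(data))
    return pcas

def modular_features(img, pcas, block=16):
    return np.concatenate([p.transform(b.reshape(1, -1)).ravel()
                           for p, b in zip(pcas, make_blocks(img, block))])

# Hypothetical usage with random stand-in "face" images of size 64x64.
train = [np.random.rand(64, 64) for _ in range(20)]
pcas = fit_modular_pca(train)
feature_vector = modular_features(train[0], pcas)
print(feature_vector.shape)        # (number_of_blocks * n_components,)
```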
Keywords: modular Analysis; principal component analysis; face recognition; modular PCA; pattern recognition
This paper constructs a transmission model with forced treatment of online game addicts. First, the next-generation matrix method is used to determine the basic reproduction number of the model; then the local stability of the model and the transmission conditions of online game addiction are judged according to the basic reproduction number. Further, the effect of treatment delay on online game addicts is analyzed. Finally, numerical simulation is used to verify the stability of the equilibrium points, and an effective treatment cycle for online game addicts is put forward.
Keywords: online game addictive equilibrium; online game non-addictive equilibrium; time delay; stability
The shape of the radiating patch greatly influences the performance of a microstrip patch antenna. The main aim of this research article is to design microstrip antennas with circular and pentagonal patches on an FR4 substrate employing a partial ground plane, which notably reduces structural complexity through antenna size reduction combined with good radiation efficiency at a center frequency of 8.5 GHz. The return loss, VSWR, gain, directivity, radiation pattern, and radiation efficiency of the antennas are investigated using ANSYS HFSS 13.0 software, which reveals beneficial dual-band behaviour over the entire 5-15 GHz frequency range compared with earlier analyses. The triangular patch antenna is found to perform far better than the alternatives, achieving best gains of 2.2 dB and 6.8 dB, minimum return losses of 19.0144 dB and 34.5612 dB, enhanced bandwidths of 0.50 GHz and 1.15 GHz, and higher radiation efficiencies of 81% and 94% at 9 GHz and 12.34 GHz, respectively. This makes it applicable to marine radar communication (SART), satellite communications, weather monitoring, and defense and military purposes in wireless communication applications.
Keywords: Microstrip patch antenna; Circular & Pentagonal shape; FR4 Epoxy; S11 parameter; VSWR; bandwidth; gain; directivity; radiation pattern and radiation efficiency
This research paper presents a rectangular slot-loaded microstrip antenna with a defected ground structure (DGS), which can provide multi-band operation and high gain for C-band, X-band, and Ku-band applications. The antenna structure is simulated with HFSS 13.0 software for an illustrative investigation of the enhanced gain. The recent evolution of wireless internet access has created a strong demand for multi-band antennas. The investigated antenna achieves nine frequency bands: 6.0350 GHz (Global Navigation Satellite System (GNSS)), 6.7950 GHz (satellite TV), 7.46 GHz (long-distance communication), 8.5050 GHz (terrestrial broadcast radar), 11.45 GHz (space communication), 13.35 GHz (detectors), 15.06 GHz (military), 17.24 GHz (aerospace), and 19.05 GHz (astronomical observation). The multi-band behaviour is accomplished by novel U-slot cuttings and rectangular slots in the antenna. It resonates at quad-band without any patch or ground modification; when a U-slot cutting is made at the left and top of the patch, it resonates at six bands; and when rectangular slot cuttings are added to the ground plane, it resonates at nine bands. The slot length, width, and position are optimized to attain the highest gain. The achieved gains of the proposed antenna are 1.5865 dB, 1.1344 dB, 1.0416 dB, 0.92179 dB, 3.7586 dB, 6.2776 dB, 5.1998 dB, 14.679 dB, and 5.4279 dB at 6.0350 GHz, 6.7950 GHz, 7.4600 GHz, 8.5050 GHz, 11.45 GHz, 13.3500 GHz, 15.0600 GHz, 17.2450 GHz, and 19.0500 GHz, respectively. In wireless communication, such a multi-band antenna is highly advantageous.
Keywords: Multiband Microstrip antenna; Slot cutting techniques; DGS; C, X and Ku band
The SI system of units implies a damaging separation between engineers and theoreticians; in particular, the electrical units are useless. Thus, in cosmology, we must first calculate what distance corresponds to three universal constants excluding the speed of light c, which is too slow to ensure cosmic coherence. In the standard triplet G, c, ħ, which defines the Planck mass, length and time (the last two constituting the "Planck wall"), the first choice is to replace c with the average mass of the three main particles of atomic physics (electron, proton, neutron) and to calculate a length, since a length is what is really measured in the Hubble-Lemaître law. This gives half of 13.8 billion light-years (see the French Wikipedia article "analyse dimensionnelle"). This invariable length, calculated in the first three minutes of the sabbatical year 1997 (Univ. Paris 11), is now identified (2023) not only with the directly measured Hubble radius, but also with the product by c of the characteristic time of standard cosmology, which therefore can no longer be considered the "age" of the Universe, but rather the time constant of exponential recession, the unique parameter of steady-state cosmology, which also predicted the acceleration of the recession. This "3-minutes formula" implies that the critical cosmic condition, observed with surprise around 2000, does not need any inflation; it corresponds to a very simple relationship. It is the equalization of the diametral area of the visible Universe relative to the Planck area (the Bekenstein-Hawking entropy of the visible Universe considered as a black hole) with the perimeter of the Universe relative to its wavelength, which pushes back the "Planck wall" by a factor of 10^61, illuminating the vacuum energy enigma. By separating the electron from the proton-neutron couple, another relation of the same "holographic" type extends to the wavelength of the thermal background, with a precision that excludes any hazardous numerology. The great quanto-gravitational unification, vainly sought for a century, is therefore within everyone's reach, and was deposited in a sealed envelope at the Paris Academy of Sciences (March 1998). We have thus predicted that the far-field Universe must be identical to the near domain, which is indeed indicated by the first observations of the JWST. This marvel of technology thus consecrates the triumph of physical common sense and the talent of engineers over the theoreticians, drowned in their formalism, who imposed the illusion of the expansion of the Universe and the initial Big Bang. The JWST must now observe the isothermy of the Universe (http://holophysique.free.fr).
Every country's financial system relies on the banking sector. It has an impact on the economy of the country by providing loans, infrastructure, and investment. The banking sector is critical to any country's growth and expansion. Hence this study focuses on the performance assessment of the new-generation South Indian Bank. The financial performance analysis of South Indian Bank involves an evaluation of the Bank's financial health, profitability, and efficiency. This analysis provides valuable insights into the Bank's overall performance and ability to generate sustainable returns for its stakeholders. The Bank's financial performance is evaluated using various financial ratios and indicators, such as the credit-deposit ratio, investment-deposit ratio, cash-deposit ratio, cost-income ratio, deposit-cost ratio, yield on advances ratio, yield on investments ratio, fixed assets to net-worth ratio, and other ratios. These ratios are calculated by analyzing the Bank's income statement and balance sheet. The analysis reveals that South Indian Bank has maintained a stable financial position recently, with steady net interest income and profitability growth. The Bank's asset quality has also improved over time, with a reduction in non-performing assets and an increase in the provision coverage ratio. The financial performance analysis of South Indian Bank suggests that the Bank has a stable financial position. Still, it needs to address specific areas of concern to sustain its growth and profitability in the future.
Keywords: Financial performance analysis; Non-Performing Asset; Ratios; Private Sector Bank
This work is carried out to develop a system that maintains silos in a safe and hygienic environment for storing food grains by monitoring them at regular intervals. We keep track of parameters in the silo such as temperature, humidity, and carbon dioxide concentration. Values recorded at regular intervals help us analyze and visualize the data and observe the effects of these parameters on the stored food grains, while providing protection against pests, rodents and other organisms that can affect the yield. It has also been observed that, while grains are stored in such silos, natural processes build up high carbon dioxide concentrations, which have caused many deaths of farmers. This issue is mitigated in this work by implementing an emergency procedure that opens the grain bin and releases the toxic gases from the silo. Through this work, we believe we can innovate in the grain storage space and solve problems faced by farmers.
Keywords: IoT; Food; Grains; ESP32; Google Apps Script; Cloud
In the IoT and WSN era, large numbers of connected objects and sensing devices are devoted to collecting, transferring, and generating huge amounts of data for a wide variety of fields and applications. To run these complex networks of connected objects effectively, there are several challenges such as topology changes, link failures, memory constraints, interoperability, network traffic, content, scalability, network operation, security, and privacy, to name a few. Thus, overcoming these challenges and exploiting them to support this technological outbreak is one of the most pivotal tasks of the modern world. In recent years, the development of Artificial Intelligence (AI) led to the emergence of Machine Learning (ML), which has become the key enabler for finding solutions and learning models in an attempt to enhance the QoS parameters of IoT and WSNs. By learning from past experiences, ML techniques aim to resolve issues in the WSN and IoT fields by building algorithmic models. In this paper, we highlight the most fundamental concepts of ML categories and algorithms. We start by providing a thorough overview of WSN and IoT technologies. We then discuss the vital role of ML techniques in driving the evolution of these technologies. Furthermore, as the key contribution of this paper, a new taxonomy of ML algorithms is provided. We also summarize the major applications and research challenges that have exploited ML techniques in WSNs and the IoT. Finally, we analyze the critical issues and list some future research directions.
Keywords: Wireless Sensor Network; Internet of Things; Machine learning categories; Machine Learning Algorithms
The goal of this research is to enhance the bandwidth of a rectangular patch microstrip antenna. The bandwidth of the antenna is tuned by analyzing various dielectric substrate materials, heights and widths, the coaxial feed line, input and output impedances with effective dielectric constants, and the effective length. The suggested rectangular microstrip patch antenna operates at a frequency of 4.4 GHz. The VSWR, S11 and efficiency are analyzed as functions of frequency. This analysis emphasizes that a low dielectric constant with an appropriate height of the dielectric substrate material is extremely important for a microstrip patch antenna in terms of enhancing bandwidth as well as suppressing surface waves. The analysis gives the best results for RT-Duroid and attains a bandwidth of 56.65% at the 4.4 GHz resonance frequency. This high bandwidth makes the antenna useful in many wideband applications.
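The standard transmission-line design equations show how the substrate's dielectric constant and height enter the patch dimensions; the short sketch below applies them at 4.4 GHz with an assumed RT-Duroid permittivity of 2.2 and a 1.6 mm substrate, purely for illustration (the paper's own parameter study is not reproduced).

```python
# Rectangular microstrip patch dimensions from the transmission-line model.
import math

C = 3e8  # speed of light [m/s]

def patch_dimensions(f0, er, h):
    w = C / (2 * f0) * math.sqrt(2.0 / (er + 1.0))                     # patch width
    e_eff = (er + 1) / 2 + (er - 1) / 2 * (1 + 12 * h / w) ** -0.5     # effective permittivity
    dl = 0.412 * h * ((e_eff + 0.3) * (w / h + 0.264)) / \
         ((e_eff - 0.258) * (w / h + 0.8))                             # fringing length extension
    l = C / (2 * f0 * math.sqrt(e_eff)) - 2 * dl                       # patch length
    return w, l, e_eff

w, l, e_eff = patch_dimensions(4.4e9, 2.2, 1.6e-3)   # assumed substrate values
print(f"W = {w*1000:.2f} mm, L = {l*1000:.2f} mm, eps_eff = {e_eff:.3f}")
```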
Keywords: Dielectric substrates; Rectangular patch; Bandwidth; Coaxial Feedline; S11 parameter; VSWR and Efficiency
On the other hand, when sensitive information is stored on a cloud server, which is not under the direct control of the end user, the risk to that information increases dramatically. Many unauthorised users may try to intercept secure data in order to compromise the data centre server. Therefore, in cloud communication, the cloud provider helps to provide security measures for user-to-user communication, but in the end cloud protection is not network protection. Network-level threats can compromise the security of information, so to protect information from intruders, a security shield at different levels of the cloud network is required.
We need three layers of security shield in a cloud-based network: (1) connectivity level, (2) storage level, and (3) application level. Cloud-based protocols provide a broad set of policies and technologies to control these attacks.
As information moves between communication channels, end-to-end encryption and authentication with no data leakage are essential. During this transmission, to protect the cloud network, the Host Identity Protocol is used to authenticate IPv4/IPv6 client-server cloud networks against intruders. The MQTT and HTTP protocols provide core support for device connection and communication. In general, MQTT is supported by embedded devices and is suited to machine-to-machine interaction. HTTP-based devices do not maintain a persistent connection to the IoT cloud; instead, they use a half-duplex TCP connection and the transmission is effectively connectionless.
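As a minimal sketch of the MQTT-based device-to-cloud communication mentioned above, the snippet below publishes one sensor reading over TLS with the paho-mqtt client; the broker address, port, credentials and topic are hypothetical placeholders.

```python
# Publish one reading over MQTT with TLS (assuming paho-mqtt 1.x;
# paho-mqtt 2.x additionally requires a callback_api_version argument to Client()).
import json
import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="sensor-node-01")
client.username_pw_set("device_user", "device_password")   # hypothetical credentials
client.tls_set()                                            # TLS with system CA certificates
client.connect("broker.example.com", 8883)                  # hypothetical broker
client.loop_start()

payload = json.dumps({"temperature": 24.7, "humidity": 41.2})
info = client.publish("building/floor1/sensor01", payload, qos=1)
info.wait_for_publish()                                      # ensure delivery before exit

client.loop_stop()
client.disconnect()
```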
At this level, standardized protocols such as the Connectionless Network Protocol, the Secure Shell protocol, the Spanning Tree Protocol, the Equal-Cost Multi-Pathing protocol and many more are used. Some of these are communication protocols used for message failure detection, message monitoring and data unit identification.
Since the cloud is a vast store that deals with a large amount of data, the service provider should give each customer a separate address space with its own individual memory space. This virtual isolation is provided with the help of dedicated virtual machines. The cloud sometimes deploys firewalls to protect the data, and the Session Initiation Protocol (SIP) is used for VoIP-based communication. This protocol helps the cloud protect its network through measures such as denial-of-service mitigation, IP traffic management, toll fraud protection and encryption of data.
Nowadays, the Equal-Cost Multi-Pathing protocol is widely adopted in cloud computing because it can create multiple load-balanced paths, which play a very important role in providing variable bandwidth depending on the requirements of the application. Moreover, the Extensible Messaging and Presence Protocol (XMPP) can be used for publish-subscribe systems and file transfer.
To conclude, there is no doubt that cloud computing is the latest field in communication for technology-friendly users and promises immense benefits. Most information technology giants, such as IBM, Cisco, Google and Microsoft, have adopted it and are continuously working in this area to handle security and privacy issues. It is expected that the use of cloud computing will increase exponentially in the coming days, and simultaneously we will all face new challenges in cloud security. Hacking and various attacks on cloud infrastructure and cloud networks would affect multiple clients even if only one site or one machine is attacked. These risks can be mitigated by using the most secure protocols, security applications, dynamic encrypted file systems, data leakage prevention and recovery software, and security hardware to track unusual behaviour across servers. However, a lot of research work by experts is still required in the cloud area, because many of the concerns related to security and privacy have not yet been answered.
Dr. Varun Prakash Saxena (BE, ME, PhD) has been an Assistant Professor in the Department of Computer Engineering, Government Women Engg. College Ajmer, since 2012. He has 10 years of teaching and research experience. He has published more than 18 research papers and 5 research articles in national/international journals of repute. He is a member of 5 national/international professional societies. His interests include cryptography, networks and programming. He is also guiding more than 10 PG students.
Over the past decade, cloud computing has emerged as the fastest growing and most widely accepted concept for information exchange. It is a smartly designed information exchange paradigm built around the core concepts of data encryption, data transmission, media transmission and communication with different remote logins.
In a highly programmable and high-performance cloud-based network, central remote servers or data centres are used to store information in a secure manner. They monitor everything and help the end user retrieve information without breaking its integrity, confidentiality and access controllability.
The task of managing and maintaining security at the highest level in the cloud is inherently challenging. These cloud networks operate many security-level protocols to provide efficient and secure service to end users, who often lack technical knowledge about the security mechanisms and threats in a network.
Delay is a major Quality of Service (QoS) metric in mission-critical applications, which include health, vehicle and inspection safety applications. Some such applications run on Mobile Ad Hoc Network (MANET) setups, which come with transmission challenges arising from the size of traffic packets, environmental conditions and other factors. These challenges cause transmission delays and packet loss, and hence degraded network performance. In this article we study a Low Latency Queueing (LLQ) scheduling algorithm that makes use of three priority queues transmitting voice, video and text packets respectively. To improve delay performance, piggybacking of video packets on voice transmissions is used. The LLQ model is developed under two scenarios: (I) a voice packet is delayed once and piggybacked with video on transmission; (II) a voice packet is delayed only if a partial video packet is being transmitted, in which case the voice packet is combined with that partial video packet during scheduling. We investigate the performance of the LLQ in an M/G/1 queue under these scenarios and under two service distributions, namely exponential and Bounded Pareto (BP). The numerical results for the first scenario revealed that video packets experienced the lowest conditional mean response time/conditional mean slowdown, followed by voice packets, with text packets experiencing the highest, under the LLQ algorithm. For the second scenario, voice packets experienced the lowest conditional mean response time/conditional mean slowdown, followed by video packets and then text packets, in that order.
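As a reference point for the three-class priority setting described above (not the paper's LLQ derivation), the sketch below evaluates the classical non-preemptive priority M/G/1 mean waiting times W_k = R / ((1 - s_{k-1})(1 - s_k)) with R = sum_i lambda_i E[S_i^2] / 2; the arrival rates and service moments are hypothetical, with exponential service assumed (E[S^2] = 2 E[S]^2).

```python
# Mean waiting and response times for voice > video > text priority classes.
lam = [30.0, 10.0, 20.0]          # arrivals/s for voice, video, text (priority order)
es  = [0.005, 0.020, 0.010]       # mean service times E[S] in seconds (hypothetical)
es2 = [2 * s * s for s in es]     # second moments for exponential service

rho = [l * s for l, s in zip(lam, es)]
R = sum(l * s2 for l, s2 in zip(lam, es2)) / 2.0   # mean residual work

cum = 0.0
for name, l, s, r in zip(["voice", "video", "text"], lam, es, rho):
    w = R / ((1.0 - cum) * (1.0 - cum - r))        # mean waiting time of this class
    cum += r
    print(f"{name}: mean wait = {w*1000:.2f} ms, mean response = {(w + s)*1000:.2f} ms")
```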
Keywords: delay; video; voice; text
An efficient, functional banking system may catalyze rapid growth in numerous areas of the economy and is a critical prerequisite for a country's progress, and India is no exception. This research study will investigate and compare the financial performance of South Indian Bank and HDFC Bank using the CAMEL Model. The purpose of this article is twofold: first, to examine South Indian Bank's financial performance, and second, to compare South Indian Bank's performance to that of HDFC Bank. The study utilizes secondary data from the banks' annual reports over five years, from 2017-18 to 2021-22. The CAMEL Model is used as a framework for the analysis, comprising five dimensions - capital adequacy, asset quality, management quality, earnings, and liquidity. According to the study's findings, both banks performed satisfactorily regarding capital sufficiency, managerial quality, and liquidity. However, regarding asset quality and earnings, HDFC Bank outperforms South Indian Bank. This research will help investors, stakeholders, and policymakers make informed judgments concerning these banks.
Keywords: Capital Adequacy; Asset Quality; Management Efficiency; Earning Capacity; Liquidity; Non-Performing Asset
Pestology Combines is a manufacturing company that specializes in producing hygiene products and partners with renowned brands. The company emphasizes integrity, innovation, and customer satisfaction while promoting eco-friendly pest elimination solutions. With a state-of-the-art manufacturing unit, Pestology Combines ensures the production of high-quality defect-free products by utilizing top-notch raw materials and professional management. Additionally, the company exports its products to several countries and serves as an OEM supplier, offering customized B2B solutions. This study evaluates Pestology Combines' financial performance using four years of balance sheet and income statement data. By analyzing the changes in assets and liabilities over time, the research provides management strategies and problem-solving recommendations. Furthermore, the study explores the manufacturing sector's significance, technological advancements, and its contribution to economic growth and innovation. The findings of this research contribute to a better understanding of financial statement analysis within specific industry contexts, highlighting the importance of managing assets and liabilities effectively for sustained financial health.
Keywords: Financial performance analysis; Balance sheet and income statement analysis; Manufacturing company; Hygiene products
The research paper focuses on the increasing demand for bakery products and the growth of the bakery industry, with a particular emphasis on Sheen Confectionery in India. The study utilizes comparative statements to analyze the financial position and income position of Sheen Confectionery. The objectives of the study are to assess the company's assets and liabilities, provide recommendations for improvement, and understand the changes in the company's financial performance over time. The paper acknowledges limitations in terms of the time frame and the use of a single analytical tool. The findings highlight trends and fluctuations in various financial aspects of Sheen Confectionery, and based on the analysis, recommendations are provided to enhance the company's financial position and profitability.
Keywords: Bakery business; Sheen Confectionery; Financial statement; Comparative analysis; Assets and liabilities; Financial performance
Experience in operating roads with asphalt concrete pavements shows a wide variety of pavement distresses and deformations, such as rutting, waves and fatigue cracks. The actual service life of road structures is often much lower than the standard one. The service life of asphalt concrete road pavements in Ukraine is 5-7 years before overhaul instead of 12.
Thus, a decrease in the service life generally leads to a deterioration in the transport and operational condition of the road network.
Bitumen is one of the materials most susceptible to changes in asphalt mixes.
In the world practice of road construction, the opinion is firmly established that a radical way to improve the quality of bitumen is its modification with polymers. However, the introduction of polymer additives into petroleum bitumen is not always able to provide the required performance of asphalt concrete in terms of shear resistance at high positive temperatures, crack resistance at low temperatures, and fatigue life under long-term dynamic loads.
Compositions and technologies for producing complex-modified road asphalt mixtures based on basalt fiber, intended for the construction of non-rigid pavements of increased durability, have been developed at the National Aviation University.
Keywords: asphalt concrete; dispersed reinforcement; basalt fiber; crack resistance; shear resistance
It is known that the following phases are formed in the Fe-N system: the γ-phase (nitrogenous austenite with a face-centered cubic lattice) [1, 2]. At a temperature of 590°C, the γ-phase undergoes the eutectoid decomposition γ→α+γ'; when the γ-phase is supercooled, the transformation proceeds by a shear mechanism, i.e. a martensitic transformation. Nitrogenous martensite (the α'-phase) has a body-centered tetragonal lattice [2, 3]. The γ'-phase is a solid solution based on iron nitride, and the ε-phase, also formed during nitriding, is a solid solution based on Fe3N. In the Fe-C-N system in steel, the main strengthening phase formed during nitriding is the ε-carbonitride Fe2-3(N,C); the carbonitride phase obtained by simultaneous diffusion of carbon and nitrogen into steel has high hardness and high wear resistance. Carbon practically does not dissolve in the γ'-phase, while the γ-phase is an interstitial solution of nitrogen and carbon. According to [4], the introduction of nitrogen into the cementite lattice facilitates the formation of the carbonitride phase. Carbonitride with a cementite lattice is formed in the process of nitrocarburizing at a temperature of 680°C. When steel is nitrided, cementite, after saturation with nitrogen atoms, turns into ε-carbonitride. Studies have established that during the nitriding of alloyed steels the same phases are formed as during the nitriding of iron; alloying only changes the composition of the phases and the temperature intervals of their formation. Studies have shown that in alloyed steel, due to the nitrogen content in the ε-phase, the hardness is increased to HRC 63.
Following the nitride zone formed during nitriding of steels, there is a layer of the α-phase, which constitutes the main part of the diffusion layer. Refractory alloying elements increase the solubility of nitrogen in the α-phase. During nitriding, the mosaic blocks are also refined and the α-phase lattice is distorted. The overall change in the defectiveness of the crystal structure of the α-phase depends on the nitriding temperature. When nitriding in the range of 500-600°C, the α-phase zone consists of ferrite grains, while at temperatures above 600°C a darkly etched zone is formed, also consisting of ferrite grains. Moreover, the darkening of the ferrite grains increases with the content of alloying elements. During slow cooling after nitriding, a γ'-phase of acicular character is released from the α-phase in the steel. All alloying elements to some extent reduce the diffusion coefficient of nitrogen in the α-phase and, accordingly, reduce its depth.
The structure of the nitrided layer is formed not only at the saturation temperature but also during subsequent cooling. During cooling of the nitrided layer, the α-solid solution decomposes; this decomposition is strongly influenced by the cooling rate. If cooling is slow, granular and needle-like nitrides precipitate from the α-phase. The properties of the nitrided layer are determined by the structure formed during the saturation of the steel with nitrogen and by the subsequent transformations occurring during cooling. Two phases have high hardness: the γ'-phase and nitrogenous martensite (the α'-phase). All alloying elements reduce the thickness of the nitrided layer but significantly increase the hardness of the steel surface. It was found that the high hardness of the nitrided layer is obtained by the separation of dispersed nitrides of alloying elements from the solid solution, which distort the α-phase lattice and serve as obstacles to the movement of dislocations.
Moreover, the greatest increase in hardness corresponds to the nitriding temperatures at which nitrides are actively formed. The nitride layer forms strong elastic distortions of the crystal lattice of the α-phase. These distortions prevent the movement of dislocations and contribute to the hardening mechanism of the steel. By changing the temperature and time of nitriding, it is possible to fix various stages of nitride precipitation in the diffusion zone and thus control the degree of steel hardening. When alloying steel with several elements, the degree of distortion of the diffusion layer is much higher than that of steel alloyed with one element. Therefore, complex alloy steels tend to have higher hardness than low alloy steels.
This article presents the results of a study of the processes of nitriding of alloyed steels. It is shown that to obtain a high hardness of the surface layer of steel, it is necessary to obtain dispersed nitrides of alloying elements in the surface layer of steel. The resulting nitrides have high hardness and are an obstacle to the movement of dislocations in the steel, thereby strengthening the surface layer of the steel.
Keywords: nitriding; nitrides; hardness; temperature; alloyed steels
Many web applications that rely on centralized databases face vulnerabilities to insider attacks. While these systems implement multiple layers of security measures against external hackers, they may overlook the threat posed by employees who are already within these security layers and have access to privileged information. Users with administrative privileges in the database system can potentially access, modify, or delete data, while also manipulating corresponding log entries to erase any evidence of tampering, making detection nearly impossible. While one approach could involve developing methods to detect and trace such attacks, along with recovering the original data, this report takes a different perspective. Instead of focusing on detection and recovery, we explore a new direction: ensuring that attacks do not occur in the first place. By establishing a system that comprehensively safeguards data integrity, the need for detection, tracing, and recovery can be minimized or eliminated. This report investigates the prevention of insider attacks on databases by utilizing Bluzelle, a NoSQL database that offers decentralized database solutions for decentralized applications.
Keywords: Tampering; Centralized Database Systems; Insider Attack; Detection; Recovery; Integrity; Bluzelle
Projection welding is a joining process based on the electrical resistance of metal components that have small protrusions on the areas where assembly is desired. In this case, the welding electrodes that transfer the electric current have a flat geometry, which allows uniform pressing of the areas with protrusions, whose role is to concentrate the current lines for the formation of weld points. Excessive deformation can cause the protrusions to collapse rapidly, causing greater dispersion of the current and limiting the localized heating and melting effect, so that welding no longer occurs. The process is widely used in the electrical, electronic, automotive, food and construction industries, among others. The paper analyzes the effect of the welding parameter values during the resistance welding process on joints made between two steel components with aluminized surfaces. The surface of one of the components was imprinted with five equidistant protrusions created by plastic deformation in order to concentrate the electric current lines and achieve melting points with a limited surface area. The thin layer of aluminum deposited on the surfaces of the parts protects against corrosion, but it produces hard compounds located at the welding interface. Chemical micro-composition analyses performed with the EDAX method highlighted the diffusion of chemical elements in the welding area and the formation of Al-rich hard compounds located on the fusion line of the welded points. The microstructure of the welded areas was analyzed, and fracture strength tests of welded samples were performed to establish the most suitable values of the welding process parameters for this application.
Keywords: projection welding; current; microstructure; tensile test; microhardness
This article focuses on the importance of expert validation in the construction and evaluation of questionnaires, particularly in the context of satisfaction surveys. The article highlights that the involvement of experts provides an overview and helps understand the limitations of the coefficients used, such as Aiken's V and Cronbach's Alpha, as well as their proper interpretation.
The text mentions that expert validation can also provide additional information about the interpretation of results and key concepts in the evaluation of measurement instruments. Furthermore, it points out that expert validation can be useful in assessing the agreement among experts' ratings in an impact study.
Next, the importance of Aiken's V and Cronbach's Alpha in evaluating the validity of data obtained through questionnaires is emphasized. Aiken's V assesses the convergent and discriminant validity of a questionnaire, while Cronbach's Alpha evaluates the internal consistency of scale items.
It is highlighted that obtaining positive results in both coefficients enhances the validity of data by demonstrating good convergence, discrimination, and consistency in measuring the target construct.
In conclusion, the article emphasizes that expert validation and the use of coefficients such as Aiken's V and Cronbach's Alpha are fundamental in improving the validity of data in satisfaction surveys and other measurement instruments. These tools provide valuable information to ensure the accuracy and reliability of the results.
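A short sketch of the two coefficients discussed above follows, computed on hypothetical data: Aiken's V for a set of expert ratings of one item, and Cronbach's Alpha for a small set of scale items answered by a few respondents.

```python
# Aiken's V and Cronbach's Alpha on illustrative data.
import numpy as np

def aikens_v(ratings, lo, hi):
    """Aiken's V = sum(s_i) / (n * (c - 1)), with s_i = rating_i - lo and c categories."""
    ratings = np.asarray(ratings, dtype=float)
    n, c = len(ratings), hi - lo + 1
    return (ratings - lo).sum() / (n * (c - 1))

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

expert_ratings = [4, 5, 4, 5, 5]                 # hypothetical 1-5 relevance ratings
responses = [[4, 5, 4], [3, 4, 3], [5, 5, 4],    # hypothetical Likert responses
             [2, 3, 2], [4, 4, 5]]
print("Aiken's V =", round(aikens_v(expert_ratings, 1, 5), 3))
print("Cronbach's Alpha =", round(cronbach_alpha(responses), 3))
```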
Keywords: Continuous education; Professional development; Satisfaction survey; Content validity; Reliability
To assess the distribution of droplet sizes on a flat plate during dropwise condensation, a droplet detection technique has been devised. By integrating an equation for the single-droplet heat transfer rate with the droplet size distribution, dropwise condensation heat transfer can be modelled. This chapter provides a thorough introduction to the condensation process. Both boiling and condensation take place within the thermodynamic heat transfer process. This article covers homogeneous and heterogeneous dropwise condensation. The dropwise condensation mechanism is described with a diagram; the figure shows several contact angles, and the thermal resistances involved in dropwise condensation are identified. The relevance of the condensation process and some fundamental applications are also discussed.
Different kinds of images induce different kinds of stimuli in humans, and certain types of images tend to activate specific parts of our brain. Professional photographers use methods and techniques such as the rule of thirds and exposure control to capture an appealing photograph. Image aesthetics is a partly subjective topic, as some aspects of an image are more appealing to a person's eye than others. This paper presents a novel technique to generate a score for the aesthetic quality of an image by using image captioning. The image captioning model was trained using a Convolutional Neural Network, Long Short-Term Memory, Recurrent Neural Networks and an attention layer. After caption generation, a textual analysis is performed using an RNN-LSTM with an embedding layer, an LSTM layer and a sigmoid function, and the score of the image is then predicted for its aesthetic quality.
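A hedged sketch of the caption-scoring stage follows: tokenized captions pass through an embedding layer and an LSTM, and a sigmoid output predicts an aesthetic score in [0, 1]. Vocabulary size, sequence length and layer sizes are illustrative assumptions, not the paper's configuration.

```python
# Caption-to-score regression head: Embedding -> LSTM -> sigmoid.
import tensorflow as tf
from tensorflow.keras import layers, Sequential

VOCAB_SIZE, MAX_LEN = 10000, 30          # assumed tokenizer settings

model = Sequential([
    layers.Input(shape=(MAX_LEN,)),
    layers.Embedding(VOCAB_SIZE, 128, mask_zero=True),   # caption tokens -> vectors
    layers.LSTM(64),                                      # sequence summary
    layers.Dense(1, activation="sigmoid"),                # aesthetic score in [0, 1]
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.summary()
# Training would use pairs of (tokenized generated caption, human aesthetic score).
```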
Keywords: Image Aesthetic; Convolutional Neural Network; Long Short Term Memory; Recurrent Neural Networks; Attention Layer; Embedding Layer; Image Captioning
Dynamics of the electron
Let us briefly describe the dynamics of an electron in the quantum computer. An electron can be in one of three states in the quantum computer: it can be in a bounded state in an atom or molecule, it can be in a transitory state, or it can move in a stable, or pre-superconducting, state. Concerning the bounded state, there is not much difference between Confined Quantum Field Theory and standard quantum theory. The only difference is that the state functions have a cut-off and do not extend to infinity; one can accept solutions of the Schrödinger equation, but on a bounded domain. The difference lies mainly in the transitory and the stable, or pre-superconducting, state, as we call it in Confined Quantum Field Theory. A transitory state is one in which the electron moves around with a size or energy that does not match the surrounding potentials well enough for it to be captured. In this state the electron frequently exchanges energy with the bulk until it finds the energy, size and position to be in a bounded state or a pre-superconducting state. The pre-superconducting state is a state recognized specifically by Confined Quantum Field Theory, in which an electron moves in a periodic potential and the size of the electron is a multiple of the periodicity. In hardware engineering, avoiding impurities and defects is especially important precisely because they destroy this periodicity. We are all aware that computers always work better at lower temperature; this is because temperature disturbs the periodicity of the potential. Since in quantum computing we deal with single atoms or molecules, disturbance of such an atom or molecule by high-energy radiation can be a problem for the computation. Disturbance due to high-energy radiation, such as cosmic radiation, is more difficult to prevent; it would require a thick jacket of lead, which is not practical for a small mobile computer. One possible solution is to construct several parallel computing lines: the probability that two lines are disturbed in the same way by high-energy radiation at the same time is almost zero.
Today, quantum computing is central to advanced engineering, because there is growing demand for faster and more effective computers to process huge amounts of information in a very short time. From an engineering point of view, this means that we must go down to the atomic level in constructing computers; this is, in fact, what quantum computing is about. For this one needs a good understanding of the dynamic behavior of electrons, which in turn demands a good understanding of quantum theory. The quantum theory we teach students now is limited by uncertainty and paradox. First, it is not transparent. One may not be interested in transparency and only be interested in the result of the calculation; but even when one is focused on the result, the calculation is often complicated and confronted with ambiguity. We can never separate transparency from the other aspects of the calculation: it is through transparency that we find the best way of calculating. Here we come to Confined Quantum Field Theory. In Confined Quantum Field Theory we have no uncertainty and no paradox. Each quantum object, such as an electron, is represented by a well-defined bounded and connected manifold with a well-defined size. This size is a function of the quantum object's energy, so we often refer to the energy as the size; quantum objects with higher energy have smaller size. Moreover, in Confined Quantum Field Theory quantum objects have a well-defined position. Well-defined size and position are the transparency we gain from Confined Quantum Field Theory.
South Africa, like many other developing nations, faces significant challenges in delivering effective and fair public services. Africa in particular suffers from a catastrophic shortage of public infrastructure, and a variety of factors contribute to the infrastructure deficit. Public entities around the world are battling with effective service delivery and have adopted different models to enhance and improve infrastructure delivery. However, the models currently deployed have shortcomings, thus frustrating efforts to deliver infrastructure effectively to the general populace. South Africa has similarly had its fair share of false starts. The 2010 introduction of the Infrastructure Delivery Management System (IDMS) was specifically intended to facilitate effective, timely and sustained infrastructure development and to tackle the challenges in public sector infrastructure delivery. The study employs a multi-case, qualitative approach based on content-analysed data to examine four nations that implement infrastructure projects in Europe and Sub-Saharan Africa and to analyze the advancement of infrastructure delivery. A systematic review of infrastructure delivery models and reforms in the public sector context was carried out through the literature, and descriptive analysis was applied. The findings reveal a knowledge vacuum about the diverse approaches taken by various countries in the execution of public sector infrastructure projects, and provide little precise evidence on the performance of delivery systems and lessons learned. It is recommended that interventions such as the IDMS be contextualized, cognizant of the country's developmental imperatives.
Keywords: reforms; infrastructure; delivery; construction industry; public sector
References
Smart building management systems (SBMS) are becoming increasingly popular due to their ability to improve the energy efficiency and sustainability of buildings. This research uses LoRa technology, a current trend, to deliver a secure wireless SBMS using very little power. LoRa devices incorporate wireless communication into building management systems to increase power efficiency; ensure safety through earlier earthquake and fire detection, safety instructions and escape routes for occupants, and alerts to the local emergency response team; and provide security to the end user, together with automation and monitoring capabilities. SBMS also provide advanced automation capabilities, such as automated lighting, heating, and cooling systems, which adjust in real time based on occupancy levels and other building parameters, providing optimal comfort to occupants while minimizing energy usage. The goal is to enhance comfort, safety, security and energy efficiency for the occupants.
LoRa offers long-range coverage and low power consumption, making it ideal for use in smart buildings.
Keywords: Internet of Things (IoT); Smart buildings; LoRa; Cloud computing
References
Based on a generalization of the theory of polyhedra, a method has been developed for linearizing yield conditions of arbitrary form for plastic shells by means of inscribed and circumscribed hyperpolyhedra. The technique allows one to obtain hyperpolyhedra with an arbitrary number of faces. The linearization technique is used to construct an algorithm and a program for the automatic approximation of convex hypersurfaces by inscribed and circumscribed hyperpolyhedra with any number of faces. Automatic linearization of the plasticity conditions for rigid-plastic shells made it possible to construct an effective method and an application software package (PPP) for calculating lower-bound estimates of the bearing capacity of shallow shells with a rectangular plan. The efficiency of the programs can be increased through an optimal choice of the number of faces of the hyperpolyhedra: polyhedra with a minimum number of faces are found that lead to bearing-capacity estimates of a given accuracy. On the basis of the kinematic method of limit-equilibrium theory and the generalized Ioganson (Johansen) yield condition, a method has been developed for calculating the bearing capacity of plates of complex shape.
Keywords: geometric modelling; bearing capacity; yield conditions; approximation; hypersurface; polyhedron; equilibrium; hyperpolyhedron; seismic resistance; design; plasticity theory; rigid plasticity; six-dimensional space; surface; linearization; ultimate load; deformation; optimization; coefficient
References
Aluminum casting is a manufacturing process that has been around for some time. This method has been used to make many aluminum products used as parts of aircraft, automobiles, turbines, and structures like bridges. Every cast product is expected to have the desired and required strength so that it will not fail in application. It is therefore valuable to predetermine the strength of an aluminum cast accurately at the design stage, before casting. Accordingly, we developed a model that can predict the aluminum cast's strength and other mechanical properties by studying the flow profile of liquid aluminum flowing through the mold.
Keywords: Strength of cast; Aluminum casting; Fluid mechanics; Flow properties
Applications in Various Industries
The versatility of Functionally Graded Materials has sparked interest and found applications in a wide range of industries. In aerospace, FGMs are used in turbine blades, where the gradual variation in material properties helps withstand high temperatures and reduce thermal stresses. In automotive applications, FGMs contribute to lightweight designs while improving structural integrity. The energy sector benefits from FGMs in heat exchangers, which efficiently manage thermal gradients. Biomedical implants take advantage of FGMs' ability to mimic the natural transition between bone and implant materials, promoting better integration and reducing complications. Moreover, FGMs find applications in electronics, where they enable precise control of electrical conductivity and thermal management. Some key application areas where FGMs are used are outlined below.
Thermal Management Systems
FGMs find use in thermal management systems, such as heat sinks and cooling devices. By tailoring the thermal conductivity gradient within the material, FGMs can efficiently dissipate heat, ensuring optimal thermal performance and minimizing temperature gradients.
Wear-Resistant Coatings
FGMs can be employed as wear-resistant coatings in various industries. By gradually transitioning from a hard and wear-resistant material to a tough and ductile material, FGM coatings can provide superior resistance to wear, abrasion, and erosion.
Optics and Photonics
FGMs play a vital role in optical and photonics applications. They can be designed to possess varying refractive indices, allowing for the controlled manipulation of light and the creation of gradient-index lenses, waveguides, and optical filters.
Energy Storage and Conversion
FGMs have the potential to enhance energy storage and conversion devices. For example, in lithium-ion batteries, FGM electrodes can optimize the transport of ions and electrons, improving the battery's performance and durability.
Structural Components
FGMs offer advantages in structural applications where there are significant thermal or mechanical loads. By gradually adjusting the material properties, FGMs can minimize stress concentrations, reduce thermal expansion mismatches, and enhance the overall structural integrity of components.
Aerospace Propulsion Systems
FGMs have garnered attention in aerospace propulsion systems, particularly in combustion chambers and turbine components. The tailored composition and property gradients can withstand extreme temperatures, resist thermal fatigue, and enhance the efficiency and durability of propulsion systems.
Acoustic Applications
FGMs can be utilized in acoustic devices and systems. By controlling the density and stiffness gradients, FGMs can effectively manipulate sound waves, enabling the design of improved acoustic lenses, sound barriers, and noise-reduction materials.
Microelectronics and MEMS
FGMs find applications in microelectronics and microelectromechanical systems (MEMS). By tailoring the electrical conductivity and thermal properties, FGMs can facilitate better heat dissipation, improved electrical interconnects, and enhanced performance of microdevices.
Corrosion Protection
FGMs can serve as protective coatings in corrosive environments. By gradually transitioning from a corrosion-resistant material to a sacrificial layer, FGM coatings can effectively inhibit corrosion and extend the lifespan of structures and equipment.
These applications highlight the diverse range of industries that benefit from the unique properties and capabilities of Functionally Graded Materials. By tailoring material compositions and properties, FGMs offer opportunities for innovation and optimization, leading to improved performance, efficiency, and reliability in various technological domains.
Design Challenges and Fabrication Techniques
The design and fabrication of Functionally Graded Materials present unique challenges. Achieving the desired composition gradients requires careful consideration of material selection, processing techniques, and modeling approaches. Techniques such as powder metallurgy, thermal spray, sol-gel processes, and additive manufacturing methods like 3D printing offer the means to fabricate FGMs with controlled composition variations. Additive manufacturing, in particular, allows for the creation of complex geometries and precise property gradients, opening up new possibilities for material design.
Future Prospects
Functionally Graded Materials continue to captivate researchers and industry professionals, with ongoing efforts to refine fabrication techniques, characterize properties, and optimize designs. Advances in computational modeling and simulation techniques aid in predicting material behavior and guiding the development of novel FGMs. As additive manufacturing technologies advance, FGMs are poised to revolutionize material design, offering unprecedented opportunities for customization and performance optimization.
Conclusion
Functionally Graded Materials represent a remarkable advancement in material science and engineering. By carefully tailoring composition gradients, FGMs offer unique combinations of properties that can overcome the limitations of homogeneous materials. From aerospace and automotive applications to biomedical implants and electronics, FGMs unlock new possibilities for innovation and performance optimization. As research and development in this field continue to progress, we can anticipate exciting breakthroughs that will shape the future of advanced materials and their applications in various industries.
In the quest for materials with enhanced performance and tailored properties, scientists and engineers have turned to Functionally Graded Materials (FGMs). These engineered materials exhibit a gradual transition in composition, structure, and properties, offering unique opportunities for innovation across numerous industries. By designing materials with specific property gradients, FGMs enable the development of advanced technologies that push the boundaries of traditional homogeneous materials.
Composition and Property Variations
Functionally Graded Materials are characterized by a controlled variation in composition from one end to the other. This composition gradient can include different materials or phases, such as metals, ceramics, polymers, or composites. The gradual change in composition leads to corresponding variations in material properties, including mechanical strength, thermal conductivity, electrical conductivity, magnetic and optical properties. This property tailoring allows FGMs to excel in applications that demand specific material behavior.
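To make the idea of a composition gradient concrete, the short sketch below evaluates a power-law gradation of Young's modulus through a plate's thickness; the power-law form and the constituent values are common textbook assumptions for metal-ceramic FGMs, not data from this article.

```python
# Illustrative sketch: power-law gradation of a property through the thickness
# of an FGM plate. E_top, E_bottom and the gradation exponent n are assumed
# example values, not figures from the article.

def graded_modulus(z, h, e_bottom, e_top, n):
    """Young's modulus at height z (0 <= z <= h) for a power-law FGM."""
    fraction = (z / h) ** n          # volume fraction of the top constituent
    return e_bottom + (e_top - e_bottom) * fraction

h = 0.01            # plate thickness in metres (assumed)
E_ceramic = 380e9   # alumina-like top face, Pa (assumed)
E_metal = 70e9      # aluminium-like bottom face, Pa (assumed)

for i in range(5):
    z = h * i / 4
    print(f"z = {z:.4f} m  ->  E = {graded_modulus(z, h, E_metal, E_ceramic, n=2):.3e} Pa")
```

The exponent n controls how quickly the property shifts from the metal-rich face to the ceramic-rich face, which is the design lever the paragraph above describes.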
This eons-old parallel development could be accepted as casual and based on weak statistics, but any further analysis pushes us to consider a strong participation of creation, and entangled development within that.
Similar thinking can be used to look at fleas and humans, snakes and insects, whales and sardines, monkeys and bananas, etc.
If one brings this into the classroom, it may be that this would be a good introduction to quantum entanglement. The above entanglement would not exist if physics had no part in it.
After all, it is physics that teaches us about energy. But it does not teach us about the energy that exists, for example, when a dog nurses a newborn cat.
In the above examples, it is clear that creation demands a certain proximity to be relevant, like apple trees and humans over eons, but this is something that quantum entanglement has proven to be unnecessary. This suggests that nature has a way to act along certain rules of creation. Can one then expect that there are humans elsewhere in this universe, as long as the conditions of creation are similar? Is love a universal energy we can count on?
This means that we are not giving creation the proper amount of mention and respect. Science and engineering should be seen as extensions of creation.
The easiest example of that entanglement is the emergence of nature, and humans within it. If one observes the most important natural food for humans, fruit was clearly shaped to work with humans. For example, our hands are not optimized for ripping oxen apart but shaped ideally for picking apples from a tree.
If we follow that thinking, we reap major benefits (energy, vitamins, fiber, clean water, etc.). The apple tree, since we are eating its gonads, benefits from our digestive system simply passing its undigested seeds along to Earth, surrounded by nutritious cover.
References
Background: Atrial fibrillation (AF) is the most common cardiac arrhythmia but is currently under-diagnosed since it can be asymptomatic. Early detection of AF could be highly beneficial for the prevention of stroke, a major risk associated with AF, whose likelihood increases fivefold. The advent of portable monitoring devices can help uncover the underlying dynamics of human health in a way that has not been possible before.
Method: The purpose of this study was to validate the automated analysis of AF by SmartCardia's proprietary health monitoring device (ScaAI patch, SmartCardia S.A., Lausanne, Switzerland). To this end, a model was created and tested on three publicly available databases comprising 243,960 30-second ECG segments. The model was further tested against a set of 500 30-second ECG streams (recorded by the ScaAI patch across different clinical trials and annotated by 3 different cardiologists), especially representing problematic conditions in which determining the underlying rhythm was challenging.
Results: The created model obtained an F1-score of 94.42 against a test set from two publicly available databases, and an F1-score of 92.61 (average of the F1-scores with respect to each cardiologist) on the database assembled by SmartCardia.
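For readers less familiar with the F1 metric quoted above, the minimal sketch below shows how an F1-score can be computed for 30-second rhythm labels against reference annotations; the labels are invented placeholders, not data from the study.

```python
# Minimal sketch: F1-score of predicted rhythm labels against reference
# annotations (placeholder labels, not data from the study).
from sklearn.metrics import f1_score

reference = ["AF", "AF", "SR", "SR", "AF", "SR"]   # cardiologist labels (invented)
predicted = ["AF", "SR", "SR", "SR", "AF", "SR"]   # model output (invented)

print(f1_score(reference, predicted, pos_label="AF"))  # F1 for the AF class
```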
Conclusion: We demonstrated that the new wireless ScaAI patch had a high capacity to automatically detect AF when compared with public databases. Further studies will help identify the optimal role of the ScaAI patch in the management of cardiac arrhythmias.
Keywords: Atrial fibrillation; arrhythmias; automatic arrhythmia detection; wireless system
References
Cloud computing has transformed the technological landscape by providing businesses with scalable virtual resources and by transforming the e-learning industry. With this transformation, however, comes a paramount concern for security. The improvement of e-learning systems requires substantial investments in hardware and software. Cloud computing provides a cost-effective solution for institutions with limited resources. A comprehensive security framework adapted to the specific requirements of the e-learning platform is crucial for maximising the utility of common applications. Skilled security experts and software architects are essential to the design and implementation of such solutions. Authentication, encryption, and access controls serve as the security arsenal's armour and weaponry. The ongoing pursuit of a secure educational environment necessitates a dedicated team that stays current on the most recent threats and countermeasures. Combining cloud computing and e-learning offers numerous opportunities, but security must remain a top priority. This report examines the principles of neural networks and high-performance computing for fortifying cloud-based e-learning platforms, resulting in a tapestry of safeguards that protect the treasures of online education.
Keywords: Cloud computing; Scalable; Authentication; Encryption; Neural Networks
References
Weather forecasting, a crucial and vital process in people's everyday lives, assesses the changes taking place in the atmosphere's current state. Big data analytics is the practice of studying big data to uncover hidden patterns and useful information that might produce more beneficial outcomes. Big data is currently a topic of fascination for many facets of society, and the meteorological institute is no exception. Big data analytics will therefore produce better results for weather forecasting and assist forecasters in providing more accurate weather predictions. Several big data techniques and technologies have been proposed to manage and evaluate the enormous volume of weather data from various sources in order to accomplish this goal and to identify beneficial solutions. A smart city is a project that uses computers to process vast amounts of data gathered from sensors, cameras, and other devices in order to manage resources, provide services, and address problems that arise in daily life, such as the weather. A machine learning-based weather forecasting model was proposed in this paper and implemented using five classifier algorithms: the Random Forest classifier, the Decision Tree algorithm, the Gaussian Naive Bayes model, the Gradient Boosting Classifier, and Artificial Neural Networks. These classifier algorithms were trained using a publicly available dataset. When the model's performance was assessed, the Gradient Boosting Classifier, which achieved a predicted accuracy above 98%, came out on top.
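As a hedged illustration of the workflow described above (not the authors' code or dataset), a Gradient Boosting classifier can be trained and scored on a labelled weather table roughly as follows; the tiny synthetic table stands in for the publicly available dataset.

```python
# Illustrative sketch of the train/evaluate workflow; the synthetic table below
# is a stand-in for the real labelled weather dataset.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score

df = pd.DataFrame({
    "temperature": [30, 12, 25, 8, 28, 10, 22, 15],
    "humidity":    [40, 90, 55, 95, 45, 85, 60, 80],
    "label":       ["sun", "rain", "sun", "rain", "sun", "rain", "sun", "rain"],
})
X, y = df[["temperature", "humidity"]], df["label"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```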
Keywords: Weather forecasting; Big data; Machine Learning; smart city; Gradient Boosting Classifier
References
The atomic structure is presented on the basis of the theory of vortex gravitation. The feasibility and calculation of the values of the density and mass of electromagnetic particles are proposed. A calculation is made, which proves that the photon must have mass. In the calculations, some physical characteristics of electromagnetic particles that are accepted by modern physics are refuted.
Keywords: theory of vortex gravity; cosmology and cosmogony; Celestial mechanics
References
New tendencies in the area of artificial intelligence are dynamic in that they change the space of expression. Programs based on artificial intelligence are rapidly becoming popular in areas such as technology, music, human expression, and science. A recent look at Christie's Edmund portrait changes our present-day appreciation of the science of artificial intelligence and raises questions about the creativity associated with technology: can art be considered creative? Against this background, this research makes use of a range of AI applications and changes the perspective on AI art and its production cycle, showing that creative imagination is achievable with artificial intelligence.
Keywords: Artificial intelligence-generated art; computational creativity; machine learning
Since Confined Quantum Field Theory is an extension of special and general relativity, people may think it is a concern only for those working on astrophysics, black holes and so on. On the contrary, its simplicity makes it usable in everyday modern engineering, from electrical resistivity and superconductivity to the thermoelectric effect and crystal growth. For the thermoelectric effect we show why two metals must be joined together rather than smelted together, in order to preserve the crystal structure on both sides of the junction. The theory shows how impurities and defects affect the performance of electronic devices at different temperatures. Recently, in the Crystal Growth and Reproductive Entities article, we showed how a bigger crystal eats up a smaller one, and the theory was confirmed by a recent experiment.
References
Crystal Growth and Reproductive Entities play an increasingly important role in modern technology and in medical physics.
We will use Confined Quantum Field Theory in Crystal Growth and Reproductive Entities which will give us a more fundamental perspective.
References
In an era marked by rapid transformation and shifting priorities, the need for strategic adaptation has never been more critical. Recent findings emphasize the challenges faced by the German healthcare sector in maintaining public approval. With the healthcare landscape evolving, the industry is increasingly recognizing the importance of user-centered design. This article delves into the pivotal role that prototyping and testing, as integral components of user-centered design, can play in revolutionizing the healthcare domain. It explores how these methods can address the unique considerations of healthcare and highlights their potential benefits and challenges.
Moreover, as digital services reshape the German healthcare industry daily, the landscape is characterized by shorter planning cycles, global competition, and stricter regulatory requirements. In this dynamic environment, traditional project management approaches often struggle to keep pace. This article underscores the transformative power of Agile Project Management, a methodology that has redefined how projects and companies operate. Agile principles, known for their adaptability and value-driven focus, are examined in the context of 21st-century demands, shedding light on how they enhance project outcomes and drive organizational success.
Additionally, the article explores the Agile mindset and its alignment with methodologies like Design Thinking, Scrum, and Objectives and Key Results (OKR). The Agile mindset champions customer collaboration, welcomes change as an opportunity, and emphasizes delivering value promptly. It operates on principles such as customer-centricity, iterative development, continuous feedback, flexibility, adaptability, transparency, and collaboration.
In conclusion, prototyping emerges as a central theme, described as the art of transforming ideas into testable realities. It's defined as the process of creating preliminary versions of products or systems to visualize, evaluate, and refine their design and functionality. The article emphasizes the significance of choosing the right fidelity level for prototypes based on project goals and stages, promoting early exploration and iterative refinement throughout the design process.
Keywords: agile; prototyping; healthcare
References
Chandigarh is a well-planned city and one of the modern cities of India. The city has spacious roads, lush green gardens, and a good network of sewerage and storm-water systems, and it also owns a well-designed water supply system. Owing to the importance of the city, people from all across the country have settled there and enjoy facilities of an international level. Chandigarh is the only city that earns about 70% of its revenue itself, helping the government plan its funds in darker zones. This level of service has been achieved by the Municipal Corporation Chandigarh by increasing the reliability of the mechanisms involved in water supply, from the source to the disposal of the water. The optimization of the system also has features such as 24-hour water supply in some sectors and regular supply three times a day to all sectors, villages and slum colonies. The system includes pumping water from the Kajauli Water Works (on the Bhakhra Canal), District Ropar, Punjab, to Sector 39 Chandigarh, treatment of the water, distribution, metering, disposal, and further treatment of the sewage for recycling purposes. This paper reflects the application of "DFR" in the Water Supply System of Chandigarh.
Notes
The United Nations Convention on the Use of Electronic Communications in International Contracts (New York, 2005) takes as its point of departure the earlier texts of UNCITRAL to become the first treaty to give legal certainty to electronic contracting in international trade.
More recently, the UNCITRAL Model Law on Electronic Transferable Documents (2017) applies the same principles to enable and facilitate the use of transferable documents and securities in electronic form, such as bills of lading, bills of exchange, checks, promissory notes and warehouse receipts.
In 2019, UNCITRAL approved the publication of the Notes on key issues related to cloud computing contracts, while continuing to work on the development of a new instrument on the use and cross-border recognition of electronic identity management services and authentication services (trust services).
Peer-to-peer technologies, author’s rights and copyright. Bogotá: Editorial Universidad del Rosario, 2009.
References
Regulations
The present work explores, from the nature of civil law and the source theory of legal obligation, the evolution of Smart Contracts and blockchain technology in the implementation of an RC20 and RC21 token-issuing system with fiduciary support as a mechanism of legal security, in order to propose a model for the potentialization of destinations with high tourism demand in developing countries, within the framework of the design, implementation and commissioning of the destination Hotel Aiden by Best Western Quito DM, Republic of Ecuador, City and Beach.
Keywords: Smart Contracts; Smart Tourism; Tokenization; Blockchain; contractware
References
The objective of this paper is to share the details of standard Delivery Attempt Verification Techniques for accurately determining whether a delivery attempt made by a delivery rider is valid against the order or parcel ordered online by the customer through the shipper's e-commerce site. The researchers working in this area use qualitative research. The data for the research were taken from delivery mobile applications, blogs, social media, and e-commerce sites, and then presented descriptively. The study shows that the formation of a standard regulation will improve the visibility of delivery attempts in logistics operations. In order to achieve this, several steps need to be taken for the successful implementation of the regulation. This paper is limited to clearly describing the challenge currently faced, suggesting possible solutions such as live tracking, lat-long capture, delivery codes, and verification calls or SMS, explaining how they would work, and finally outlining the positive impact the solution would have on customers and the logistics sector.
Keywords: First Mile; Mid Mile; Last Mile; Logistics; E-Commerce; COD (Cash on Delivery)
At the forefront of biomedical engineering are innovative medical devices, each meticulously crafted with the intention of transforming patient care, alleviating pain and suffering, and extending human lifespans. From magnetic resonance imaging (MRI) machines to pacemakers, artificial organs to state-of-the-art prosthetic limbs, these life-saving devices are a testament to the profound impact of biomedical engineering on healthcare.
Beyond these visible marvels, biomedical engineers delve into the intricate realm of biomechanics and rehabilitation. They design orthopedic implants, rehabilitation equipment, and assistive devices that empower individuals to regain mobility and independence after injuries or surgeries. This field represents not just physical healing but also the restoration of hope and human dignity.
Imagine a world where damaged organs can be repaired or replaced without the need for organ donors. Biomedical engineers are propelling us toward this extraordinary future through groundbreaking work in tissue engineering and regenerative medicine. Their efforts hold the promise of growing human organs in the laboratory, offering hope to countless patients awaiting life-saving transplants.
Effective drug delivery is another vital aspect of biomedical engineering. It can mean the difference between a successful treatment and an ineffective one. Biomedical engineers engineer drug delivery systems that ensure medications reach their intended targets with precision, minimizing side effects and maximizing therapeutic benefits.
The field also plays a pivotal role in the development of diagnostic tools and medical imaging technologies that have revolutionized healthcare. Computed tomography (CT) scans, positron emission tomography (PET) scans, and genetic testing have empowered physicians to detect diseases at earlier stages, resulting in more effective treatments and improved patient outcomes.
In the age of artificial intelligence, biomedical engineers are harnessing the power of machine learning and data analytics. They create tools capable of analyzing vast amounts of medical data, assisting with diagnostics, and even predicting disease outbreaks. These innovations have the potential to revolutionize medical decision-making and improve patient care on an unprecedented scale.
Moreover, biomedical engineers are making a global impact by crafting low-cost, easily maintainable medical devices for resource-constrained regions. These innovations extend modern healthcare to underserved communities worldwide, bridging the gap between medical technology and accessible healthcare.
Biomedical engineering is not merely about machines and gadgets; it's about elevating the human condition. As technology continues to advance, the horizons of innovation within this field remain boundless. Biomedical engineers tirelessly address complex health challenges, propelling us toward a future where healthcare is more effective, accessible, and patient-centric than ever before. Their work is nothing short of remarkable, and its potential to positively impact our lives is immeasurable.
As we navigate the challenges of the 21st century, engineers stand at the forefront of innovation and progress. Their work not only shapes our daily lives but also holds the key to a sustainable and prosperous future. One such field of engineering is Biomedical engineering. It is a dynamic and rapidly advancing field, sitting at the captivating crossroads of medicine, biology, and engineering. These dedicated professionals serve as the vanguards of healthcare innovation, leveraging their deep knowledge of engineering, biology, and medicine to bring about groundbreaking advancements that not only enhance patient outcomes but also revolutionize our entire approach to healthcare challenges. Its primary mission is to elevate healthcare standards, enhance patients' quality of life, and, most importantly, save lives. This remarkable discipline achieves these goals through the ingenious application of cutting-edge technology and profound scientific insights.
References
The field of artificial intelligence (AI) in healthcare is rapidly expanding worldwide, with successful clinical applications in orthopedic disease analysis and multidisciplinary practice. Computer vision-assisted image analysis has several U.S. Food and Drug Administration-approved uses. Recent techniques with emerging clinical utility include whole blood multicancer detection from deep sequencing, virtual biopsies, and natural language processing to infer health trajectories from medical notes. Advanced clinical decision support systems that combine genomics and clinomics are also gaining popularity. Machine/deep learning devices have proliferated, especially for data mining and image analysis, but pose significant challenges to the utility of AI in clinical applications. Legal and ethical questions inevitably arise. This paper proposes a training bias model and training principles to address potential harm to patients and adverse effects on society caused by AI.
References
This study aimed to investigate the role of social facilitation in performance. It was conducted on 28 university students in the Psychology Department of Karachi University. A test was administered using a Speed and Accuracy Cancellation Sheet, individually and within a group. The result was a mean score of 50.036 when working alone versus 46.714 when working in a group, so the expected effect of social facilitation on performance was not supported. Different factors can affect social facilitation, such as fear of audience evaluation, an opposite-gender audience, bad mood, an inner drive for performance, or distraction and conflict within oneself. Even when performing a seemingly simple task, fear of public shame can be a reason for harmful social facilitation of performance. A drop in performance is just one of the potential adverse outcomes of extreme fear.
Keywords: Social Facilitation; Speed and Accuracy; Performance
References
It would be an understatement to say that the internet is hazardous in this age of constantly evolving attack mechanisms and pervasive data thefts. For security specialists, it is akin to engaging in an endless game of cat-and-mouse as they traverse an ever-changing landscape. Using only firewalls and antivirus software against a modern, well-equipped army is equivalent to using spears and stones. Social engineering or malware employing packing or encoding techniques that evade our detection tools are all that an adversary needs to compromise our system. Therefore, it is imperative to transcend the limitations of edge defence, which primarily focuses on tool validation, and adopt a proactive strategy that emphasises intrusion identification and prompt response. This can be accomplished through the implementation of an ethereal network, a comprehensive end-to-end host and network approach that not only scales effectively but also ensures accurate intrusion detection. Our objective is not limited to mere obstruction; it also includes a substantial reduction in time. When conventional security measures, such as firewalls and antivirus software, fail, we must swiftly ascertain the nature of the incident and respond accordingly. In industry reports, response times are frequently measured in weeks, months, or even years, which is untenable. Our objective is to reduce this timeframe to hours, a significant reduction that will improve our response capabilities. Therefore, an effective approach to breach detection becomes essential. Together with a robust honeypot system, we employ a Modern Honey Network (MHN) to facilitate honeypot management and deployment while ensuring their security. This fusion includes honeypots such as Glastopf, Dionaea, and Kippo, which document suspicious activities and capture crucial details of the attacks on the MHN server. In addition, reconnaissance is essential to our research. Recognising the complexities of reconnaissance, we make it the focal point of our efforts. When malware or insider threats penetrate our network, they frequently conduct reconnaissance to determine the extent of their access. By closely observing this type of activity, we can readily identify any suspicious network intrusions or compromised Internet of Things devices. Our deployment strategy concludes with the installation of MHN, the deployment of Dionaea, Kippo, and Snort honeypots, and their integration with Splunk for effective analysis of captured attacks. This integration enables us to identify the specific service ports under attack and trace the assailants’ source IP addresses, providing invaluable information for further investigation and mitigation.
Keywords: Breach; Ethereal; Intrusion Detection Systems; Honeypot; Reconnaissance
References
Digital transformation and upgrading pose new requirements for talent quality. Taking human resources positions as an example, this article constructs a "3+X" talent quality evaluation index system that includes knowledge, ability, attitude, and professional skills. Based on the data characteristics of 55 employees in human resources positions, a combination weighting model, entropy method, and BP neural network model are used, and an evaluation index system and weight evaluation index system suitable for the quality characteristics of talents in China's hydropower industry have been constructed. The research results show that employee competence is an important indicator for driving digital transformation, with knowledge and professional skills playing a relatively central role, and attitudes being less differentiated. Finally, the research significance and shortcomings of the study are summarized, and the prospects for future development are provided.
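The entropy method mentioned above for deriving indicator weights follows a standard recipe; the sketch below illustrates it on a small invented score matrix rather than the study's 55-employee dataset, so the numbers are placeholders.

```python
# Entropy-weighting sketch: derive objective weights for evaluation indicators.
# The 4x3 score matrix is an invented example, not the study's data.
import numpy as np

scores = np.array([[0.8, 0.6, 0.9],
                   [0.5, 0.7, 0.4],
                   [0.9, 0.8, 0.7],
                   [0.6, 0.5, 0.8]])     # rows: employees, cols: indicators

p = scores / scores.sum(axis=0)                  # normalise each indicator column
k = 1.0 / np.log(scores.shape[0])
entropy = -k * (p * np.log(p)).sum(axis=0)       # entropy per indicator
weights = (1 - entropy) / (1 - entropy).sum()    # lower entropy -> higher weight
print(weights)
```

Indicators whose scores vary more across employees receive larger weights, which is the "objective" counterpart that the combination weighting model balances against expert judgement.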
Keywords: Digital transformation; Talent quality characteristics; Evaluation index; BP neural network; Entropy method
References
In recent years, digital twins have become a more significant strategic trend in the construction industry. Stakeholders in the industry view it as a technology-driven innovation that has the potential to support the design, building, and operation of constructed assets, alongside advancements in other new-generation information technologies such as the Internet of Things (IoT), artificial intelligence (AI), big data, cloud computing, and edge computing. However, the construction project context generates various organizational and functional information through model-based domain-specific information models that require integration and analysis. Furthermore, commercial technologies enable the integration of real-time data sources with building information models (BIM), but these tools are often proprietary and incompatible with other applications. This lack of interoperability among heterogeneous data formats is a major obstacle to the reliable application of digital twins in the construction industry. To address this challenge, this study presents a multimodel framework developed using Information Container for Linked Document Delivery (ICDD) that can integrate multiple data models from autonomous and heterogeneous sources, including real-time data sources, in their original format at the system level. This framework enables stakeholders to analyze, exchange, and share linked information among the built asset stakeholders, relying on linked data and Semantic Web technologies.
Keywords: BIM; Multimodel; Digital Twin; Linked data; Information Containers
References
Italy has a rich world cultural heritage, and its protection strategy has developed accordingly. It has experienced changes in values that emphasize "restoring the original shape of the building", "historical authenticity", and "integrity protection". The aesthetics presented have also changed, highlighting the "beauty of historical truth and the beauty of the differences of contemporary imprints".
Keywords: architectural cultural heritage; protection strategy; aesthetic development; gestalt and authenticity
A well-known classification algorithm that we often hear about and use in solving several problems is the k-Nearest Neighbor (KNN) classifier, which can also be used for regression as the KNN regression algorithm. Other examples are the Naïve Bayes classifier and the Artificial Neural Network (ANN), where the neural network can also carry out regression. The Support Vector Machine (SVM) can likewise perform regression with the SVM regression algorithm, as can the Decision Tree (DT) classifier with the DT regression algorithm. Apart from that, the Random Forest (RF) classifier can also be used for classification, with regression carried out by the RF regression algorithm. Classification can further be performed with the Generalized Regression Neural Network (GRNN), a variation of the Radial Basis Neural Network (RBNN), where GRNN can be used for either classification or regression. The gradient-boosted tree classifier can also be chosen for classification, with regression performed by the gradient-boosting machine regression algorithm. Finally, classification can be carried out with the multilayer perceptron classifier, One-vs-Rest (also known as One-vs-All) classifiers, and factorization machine classifiers.
In addition, classification can be used to produce two output classes, namely binary classification. The following algorithms can be used for both binary classification and regression: Logistic Regression, LightGBM, XGBoost, and Neural Networks (Deep Learning). Apart from that, the following algorithms are specific to binary classification with two output classes: Naive Bayes (Gaussian), Naive Bayes (Bernoulli), K-Nearest Neighbors (KNN), Support Vector Machine (SVM), Decision Tree (DT), Random Forest (RF), and the Gradient Boosting Machine.
A classification model can be measured using a confusion matrix, a matrix for summarizing the performance of a classification algorithm, also known as an error matrix. A confusion matrix relates the predicted and actual conditions in the population. On the predicted side, PP is the number of positively predicted cases and PN is the number of negatively predicted cases. On the actual side, P is the number of actual positive cases and N is the number of actual negative cases. The confusion matrix has four scores: TP (True Positive), TN (True Negative), FN (False Negative) and FP (False Positive). TP is a test result that correctly indicates the presence of a condition or characteristic, whereas TN is a test result that correctly indicates the absence of a condition or characteristic. FN is a test result that wrongly indicates that a particular condition or attribute is absent, and FP is a result that wrongly suggests that a particular condition or attribute is present.
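To make the four scores concrete, the short sketch below derives the metrics most commonly read off a confusion matrix (accuracy, precision, recall and F1); the counts are arbitrary illustrative values, not results from this work.

```python
# Derive common classification metrics from confusion-matrix counts.
# The counts are arbitrary illustrative values.
TP, TN, FP, FN = 40, 45, 5, 10

accuracy  = (TP + TN) / (TP + TN + FP + FN)
precision = TP / (TP + FP)
recall    = TP / (TP + FN)          # also called sensitivity
f1        = 2 * precision * recall / (precision + recall)

print(accuracy, precision, recall, f1)
```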
A number of metrics can be used in measuring classification models. These metrics apply only to the classification process: they cannot be used for other supervised methods, such as regression, and even less for clustering processes, which are unsupervised models.
When dealing with supervised models covering classification and regression, we need dedicated evaluation matrices, such as the confusion matrix. In software engineering, the final step, testing, is the most crucial stage in software creation, and the confusion matrix plays that role for classification models. Software must not only be created but also tested; testing carried out from the internal side of the software is called white-box testing, and it is even better if the software is also tested externally, which is often referred to as black-box testing. In this context, black-box testing is carried out by distributing questionnaires to software users as a user acceptance test.
References
Federated learning (FL) is a distributed machine learning technique that enables remote devices to share their local models without sharing their data. While this system benefits security, it still has many vulnerabilities. In this work, we propose a new aggregation system that mitigates some of these vulnerabilities. Our aggregation framework is based on connecting with each client individually, calculating each client's model change and how it would affect the global model, and preventing the aggregation of any client model until the accepted range of distances to the other clients has been calculated and the client's distance falls within it. This approach aims to mitigate Causative, Byzantine, and Membership Inference attacks. It has achieved an accuracy of over 90 percent in detecting malicious agents and removing them.
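A minimal sketch of the distance-screening idea described above follows; the median-based threshold and the FedAvg-style averaging are assumptions about one possible realisation, not the authors' exact algorithm.

```python
# Sketch: screen client updates by their distance to the median update before
# averaging. The threshold rule is an assumption, not the paper's exact method.
import numpy as np

def filtered_aggregate(client_updates, tolerance=2.0):
    updates = np.stack(client_updates)                 # shape: (clients, params)
    centre = np.median(updates, axis=0)
    distances = np.linalg.norm(updates - centre, axis=1)
    limit = np.median(distances) * tolerance           # accepted range of distances
    accepted = updates[distances <= limit]             # drop suspected malicious clients
    return accepted.mean(axis=0)                       # FedAvg over accepted clients

clients = [np.array([0.1, 0.2]), np.array([0.12, 0.18]), np.array([5.0, -4.0])]
print(filtered_aggregate(clients))                     # the outlier client is excluded
```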
Keywords: Federated Learning; Security; Step-wise Model Aggregation
References
Seawater desalination is an alternative that can extend water supplies beyond what is available in the hydrological cycle, with a constant and climate-independent supply. A radical transformation in the way we use natural resources is central to meeting the needs of future generations. The growing desalination market across the world has thrown up the challenges of managing brine and end-of-life (EoL) membranes. To achieve a sustainable desalination strategy, the current economy based on the linear model has to be replaced with a circular economy model through suitable technologies. In our study, we have carried out various process studies on brine management and membrane management to facilitate value addition to desalination plants and to facilitate reuse and recycling for lower-end applications. The process technologies based on these studies are presented in this engineering article. The coupling of trace-metal recovery from brine using radiation-induced grafted sorbents is highlighted. These process studies and technologies will help to incorporate the principles of the circular economy for the sustainable development of desalination.
Keywords: Desalination; Brine management; Membrane management; Circular economy
References
Permafrost, which occupies about 25% of the entire land area of the globe, and its degradation are the subject of global research. Many works by international researchers reflect the results of studies on permafrost soils common in North America, Canada, Europe and Asia, and naturally in the Arctic and Antarctica. As a result of permafrost degradation all over the world, including in Mongolia, permafrost of mainly discontinuous and island types is thawing; here the thickness of the frozen soils ranges from 2.0-4.0 metres to several tens of metres, and as a result of the last few decades, complete thawing of most island permafrost is possible. From the point of view of permafrost engineering, thawing and degradation of permafrost lead to a decrease in the bearing capacity of the base soil, which in turn leads to a loss of stability of buildings and engineering structures, with possibly catastrophic consequences.
Keywords: geocryology; thawing; methane release; active layer; mechanical properties of frozen soils
The aim is to attract every farmer to do agricultural business digitally and to practise smart agriculture in order to obtain more productivity without soil or land. There is a dire need for Internet of Things (IoT) devices in the hydroculture farming sector in India, where agriculture is the backbone of the country. Indian agriculture faces several problems, such as small and fragmented land holdings and the fertilizers, pesticides and chemicals used for agriculture. One solution is the hydroponic system. Hydroponics is a method in which plants grow without using soil and which gives more production than soil farming in less time. The purity of the water is controlled by the system, which determines how well the plants grow: a microcontroller automatically maintains the purity level of the water solution using a turbidity sensor. We therefore need to develop and implement IoT devices to address the above difficulties. A hydroponic system does not require land, so it is land-free farming. Human beings demand quality food that is free from chemicals and pesticides, and here we can go organic; it should be done in a controlled environment, and it can also be done in a garden, on a balcony or in some other controlled area. This research paper is a basic study of IoT devices used in hydroponic systems and their impact on the productivity of the agricultural sector emerging as hydroculture farming.
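As an illustration of the turbidity-based control described above, a simple threshold loop could look like the sketch below; read_turbidity() and run_fresh_water_pump() are hypothetical stand-ins for the actual microcontroller I/O, and the threshold is an assumed value.

```python
# Illustrative control-loop sketch for maintaining water purity with a
# turbidity sensor; the sensor read and pump functions are hypothetical
# placeholders for the real microcontroller I/O.
import time

TURBIDITY_LIMIT_NTU = 50.0      # assumed acceptable turbidity threshold

def read_turbidity():
    """Placeholder for an ADC reading from the turbidity sensor."""
    return 42.0

def run_fresh_water_pump(seconds):
    """Placeholder for switching the flush/dosing pump on briefly."""
    print(f"pump on for {seconds} s")

for _ in range(3):              # shortened loop for the sketch
    if read_turbidity() > TURBIDITY_LIMIT_NTU:
        run_fresh_water_pump(5)  # dilute/refresh the nutrient solution
    time.sleep(1)                # in practice, check once a minute or so
```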
References
AR applications have become an integral part of our daily life as a form of communication and for entertainment, shopping, travel, education, medicine, robotics, manufacturing, etc. [1, 2]. The development of simulation applications has recently become quite common. The development and management of applications with interactive environments, and improvements such as the use of augmented and virtual reality technologies (AR and VR) [1] and recognition methods [3, 4] to improve user interaction, play an important role in current development trends.
The introduction of AR and VR in educational institutions and digital transformation in industrial and non-industrial areas has grown exponentially in recent years [5]. The article [5] presents the application of augmented reality technology in the field of engineering.
For example, in medicine, VR is used as a training environment for future doctors, especially surgeons, allowing them to gain not only theoretical knowledge but also real practical experience, while eliminating the risk of harming a real patient [6]. In addition, it provides an opportunity to analyze rare diseases and acquire skills for dealing with them in emergency situations. VR can also help in the rehabilitation of patients after severe injuries. In science, virtual reality technology can be used to simulate complex systems and processes, allowing researchers to study them in detail and make new discoveries.
In education, VR has long been used to conduct interactive lessons and create unique simulators that help accelerate the progress of learning the material and contribute to the acquisition of practical skills [7, 8].
This paper provides experiences and suggestions for the practice-oriented development and deployment of augmented reality technology in various fields that can be used for future augmented reality research.
Keywords: Augmented reality (AR); Virtual reality (VR); Web AR; image; configuration
Recent developments in composite material research focus on improving performance, cost-effectiveness, and sustainability. Innovations in fiber technology, matrix materials, and manufacturing processes enable the deployment of composites with enhanced properties. The integration of nanomaterials and advanced manufacturing techniques, such as automated fiber placement and 3D printing, has further expanded the design possibilities and applications of composite materials [2].
In static testing, advances have been made in several areas [3]. Digital image correlation (DIC) has been used to capture deformations and crack growth. The integration of acoustic emission testing with machine learning algorithms has improved the identification of specific failure modes. Advanced thermographic techniques, such as lock-in thermography and pulsed thermography, have been tailored to offer improved sensitivity and resolution for detecting subtle defects. In dynamic testing, servo-hydraulic testing systems have been combined with high-speed cameras to facilitate a more accurate representation of real-world loading conditions. The integration of instrumented impactors with advanced sensors in drop-weight impact testing provides detailed data on impact energy, force, and deformation. The implementation of shearography in dynamic testing has made possible the real-time monitoring of damage evolution during impact. Other developments include the use of DMA in conjunction with other testing methods, such as rheometry and spectroscopy, to obtain a more comprehensive understanding of dynamic behavior.
Numerical modeling plays a fundamental role in understanding and predicting the behavior of composite materials, complementing experimental testing. Finite Element Analysis (FEA) is a widely used numerical technique that simulates the response of composite structures under various loading conditions. FEA allows researchers and engineers to assess stress distribution, deformation, and failure modes, providing deep insight into the material's performance. Recent advancements in numerical modeling include the incorporation of multiscale modeling, allowing for a more accurate representation of the complex interactions between fibers and the matrix at different length scales. Computational tools based on machine learning algorithms are also emerging as powerful tools to predict material properties, optimize designs, and reduce the time and cost associated with traditional trial-and-error approaches [4].
References
Fiber-reinforced composites are engineered materials that derive their enhanced properties from the combination of the fibers and matrix. The integration of carbon, glass, or aramid fibers with a matrix material forms a composite that exhibits superior mechanical properties compared to traditional materials. The fibers provide the composite with high strength and stiffness, while the matrix material, often of polymeric, metallic, or ceramic nature, binds the fibers together and ensures load transfer. This fiber/matrix synergy results in a material with superior strength- and stiffness-to-weight ratios and improved mechanical performance. The extensive utilization of these materials in different industries justifies the need for continuous advancements in this topic to reduce costs and assure design confidence, aiming to make these materials applicable in a larger number of applications [1]. In the aerospace industry, composite materials are extensively used in aircraft structures, reducing weight and fuel consumption while maintaining structural integrity. The automotive sector benefits from composites in the form of lightweight components, enhancing fuel efficiency and overall performance. In the construction industry, composite materials contribute to the development of durable and corrosion-resistant structures, as primary structural elements or for reinforcement of existing structures. Sports equipment, such as tennis rackets and golf clubs, capitalizes on the exceptional strength and design flexibility offered by composite materials.
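A back-of-the-envelope way to see the fibre/matrix synergy described above is the rule of mixtures for the longitudinal stiffness of a unidirectional ply; the sketch below uses generic carbon-fibre/epoxy values, not figures from this article.

```python
# Rule-of-mixtures sketch for the longitudinal modulus of a unidirectional
# composite. Fibre/matrix values are generic assumptions, not from the article.
def longitudinal_modulus(vf, e_fibre, e_matrix):
    """E1 = Vf*Ef + (1 - Vf)*Em for a unidirectional ply."""
    return vf * e_fibre + (1 - vf) * e_matrix

E_fibre = 230e9   # carbon fibre, Pa (typical order of magnitude)
E_matrix = 3.5e9  # epoxy, Pa (typical order of magnitude)
print(longitudinal_modulus(0.6, E_fibre, E_matrix))  # ~1.4e11 Pa at 60% fibre
```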
References
Accurately detecting and classifying defects in wafers is a crucial aspect of semiconductor manufacturing. This process provides useful insights for identifying the root causes of defects and implementing quality management and yield improvement strategies. The traditional approach to classifying wafer defects involves manual inspection by experienced engineers using computer-aided tools. However, this process can be time-consuming and less accurate. As a result, there has been increasing interest in using deep learning approaches to automate the detection of wafer defects, which can improve the accuracy of the detection process.
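As a hedged illustration of such a deep learning approach (not the authors' architecture), a minimal convolutional classifier for wafer maps could be set up as follows; the 64x64 single-channel input and the nine defect classes are assumptions.

```python
# Minimal sketch of a CNN wafer-map classifier; the input size (1x64x64 maps)
# and the nine defect classes are assumptions, not the paper's setup.
import torch
import torch.nn as nn

class WaferCNN(nn.Module):
    def __init__(self, num_classes=9):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = WaferCNN()
dummy = torch.randn(4, 1, 64, 64)          # a batch of four wafer maps
print(model(dummy).shape)                  # torch.Size([4, 9])
```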
Keywords: Wafer detection; Deep learning; Object detection; Classification
References
Urban Computing (UC) stands as an interdisciplinary field where urban challenges are examined and, where applicable, addressed through cutting-edge computing technologies. The swift pace of urbanization has brought about significant improvements in many aspects of people's lives, but it has also given rise to substantial challenges like traffic congestion, energy consumption, pollution, soil artificialization, and heat islands. In response, Urban Computing seeks to confront these issues by leveraging the data generated in cities, facilitated by urban sensing, data management, data analytics, and service provision. This iterative process aims for unobtrusive and continuous enhancements in the quality of life, city operations, and environmental conditions. This paper introduces a comprehensive framework tailored for Urban Computing, specifically attuned to the requirements of 3D geosimulation and informed prospective analysis. Given the dynamic evolution of urban environments, the demand for sophisticated computational tools has become increasingly imperative. The proposed framework integrates cutting-edge technologies to address the intricacies associated with urban dynamics, providing a foundational basis for well-informed decision-making. Encompassing components for data acquisition, processing, modeling, simulation, and analysis, the framework underscores the synergy among these elements, promoting a holistic understanding of urban phenomena.
Keywords: Urban Sensing; Urban Data Analytics; Explainability, Smart Cities; Sustainable Development Goals; Machine Learning
References
The newly discovered general law on the compressibility of gas-containing fluid is used to calculate the density of multiphase clayey soil in a closed system under high pressure in the field of construction. Taking into account the parameters of water saturation and the dissolution of gases contained in thick layers of the clayey soils that form the foundations of high-rise buildings, the clay cores of hydraulic dams and the surroundings of underground structures will allow engineering calculations of the foundations of buildings and structures to be performed with high accuracy.
Keywords: foundation soil; pore pressure; compressibility coefficient; Henry's law; Pauson's law
References
Digital twins have become indispensable tools in urban planning, providing dynamic and interactive portrayals of urban landscapes. This paper presents an innovative perspective on urban digital twin design, placing a pronounced emphasis on a multi-scalar framework that captures the intricate dynamics of urban systems across macro, meso, and micro scales, enriched by the concept of varying levels of detail. The proposed architectural model is rooted in OpenUSD standards, harnessing the Universal Scene Description format to amplify interoperability, facilitate seamless data exchange, and enable nuanced capture of urban visual features. Our exhaustive methodology addresses the constraints observed in existing urban digital twin frameworks and showcases the effectiveness of our approach through practical implementation in real-world urban settings. The outcomes underscore the critical role of multi-scalar representation and the integration of OpenUSD standards in propelling the capabilities of urban digital twins, thereby fostering more enlightened and responsive urban planning.
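To indicate the kind of scene description the OpenUSD-based framework builds on, the minimal sketch below creates a tiny USD stage with a city container and one building prim; the paths and attribute values are invented for illustration, and the pxr (OpenUSD) Python bindings are assumed to be installed.

```python
# Sketch: create a tiny USD stage with a city prim and one building, showing
# the Universal Scene Description format the framework relies on. Paths and
# attribute values are invented for illustration.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("city_twin.usda")
city = UsdGeom.Xform.Define(stage, "/City")                  # macro-scale container
building = UsdGeom.Cube.Define(stage, "/City/Building_001")  # micro-scale asset
building.GetSizeAttr().Set(25.0)                             # footprint, assumed metres
stage.GetRootLayer().Save()
```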
Keywords: Urban digital twin; Multi-Scale representation; OpenUSD standards; Urban planning; Level of detail; Urban visual features
References
Spatial distributions of West African rainfall during the monsoon period (JJAS) from six ensemble members of the Coupled Model Intercomparison Project Phases 5 and 6 (CMIP5 and CMIP6) were analysed and compared to two observational datasets, the Global Precipitation Climatology Centre (GPCC) and the Climate Hazards Group InfraRed Precipitation with Station data (CHIRPS), using six extreme precipitation indices from the Expert Team on Climate Change Detection and Indices (ETCCDI). The annual cycle of indices based on daily rainfall, such as consecutive dry (CDD) and wet (CWD) days, was examined over the Sahelian, Savannah and Guinean regions with satellite daily precipitation estimates. The root mean square error (RMSE) and standard deviation were compared using a Taylor diagram for each subregion over West Africa. A higher positive correlation is found between CMIP6 and the reference dataset. Despite the high uncertainties, a strong correlation was found over the Savannah region between GPCC and the model simulations for extreme precipitation events (EPEs). This indicates that CMIP6 reproduces the rainfall pattern over these areas better than its CMIP5 counterpart.
Keywords: West Africa; rainfall; CMIP5 and CMIP6 models; climate change
The automotive industry faces many challenges caused by AI and is pursuing more ecological solutions. We can see a significant increase in articles announcing the implementation of the latest trends in the automotive industry. Buzzwords, including Artificial Intelligence, Machine Learning, Big Data, and blockchain, stand out among others. Although very promising, these technologies do not always match the needs of the automotive market, although they may have (narrow) applications. Quantum computing represents a revolution poised to transform the IT industry and extend to the automotive sector, posing a significant threat to automotive security in the coming decades.
On the other hand, the different renewable/non-conventional sources are solar energy (energy derived from the sun's radiation falling on the earth's surface), wind power (energy generated using the air's velocity), tidal energy (energy generated by using the different water levels during high and low tides), hydropower (energy generated by the movement of water from higher to lower heights), ocean energy (energy generated by using the waves and currents of sea water), bioenergy (energy derived from the decomposition of organic materials, called biomass), geothermal energy (energy derived from below the earth's surface, trapped during the formation of the earth and generated by radioactive decay) and lastly OTEC (ocean thermal energy conversion; energy derived by using the temperature gradient between surface and deep ocean waters).
The advantages of renewable sources are that they are renewed in nature in a short time with unlimited supply; they are less polluting to the environment because of lower carbon emissions; and they have low cost.
At present the share of renewable energy in India is 42.5% (as of February 2023) [1]. From the world's perspective, renewables account for 29% of electricity generation (as of 2020) [2]. However, many good innovations will be needed to utilize renewable sources to their maximum limits.
In today's world energy consumption is increasing at an alarming rate with rapid developments in every society and place. Since the industrial revolution, energy production has made rapid progress by utilizing conventional sources like coal, petroleum, natural gas and nuclear energy in different equipment and power plants. However, these conventional sources have drawbacks: they cannot be replaced after consumption, stocks are limited, and they take millions of years to form. They emit highly carbonaceous pollutants after use and as a result are not environmentally friendly. The costs of conventional resources are also high. The only benefits of using conventional sources are that they need less land and lower maintenance costs compared to renewable/non-conventional sources.
References
Modernization of the agricultural sector is based on the transition to “smart agriculture”. The intellectualization of agricultural technology management is of greatest interest to science and practice. At the same time, expert systems in which control decisions are made through knowledge bases (KB) are most effective. In this work, knowledge bases are formed using analytical control systems located in data processing centers. Such knowledge bases are transferred to local consumers, who make local control decisions based on them. The purpose of this work is to develop a theoretical basis for solving the problem of intelligent management of the state of agrocenoses containing the main agricultural crops and weeds. Solving this problem aims to address the limitations of the current paradigm of separate crop and weed management. The application of mineral fertilizers simultaneously stimulates the growth and development of agricultural plants and weeds, and treatment with herbicides simultaneously suppresses the growth of both agricultural plants and weeds. As a result, this leads to significant crop losses and excessive consumption of fertilizers and herbicides. In the presented work, for the first time, the problem of managing agrocenoses is raised and solved at the program level, implemented during one growing season. At this level of management, programs are formed that represent a sequence of technological operations for the application of mineral fertilizers, irrigation and herbicide treatments, ensuring the achievement of a given crop yield. To solve this problem, the previously developed theory was supplemented with modified mathematical models of the state of cultivated crops that reflect the influence of herbicides. In addition, a model of the state parameters of the dominant weed species was introduced into the control problem, which, in addition to the doses of herbicide treatments, also reflects the influence of mineral fertilizers. The problem is solved using the example of sowing spring wheat as part of agrocenoses.
Keywords: agrocenoses; program control; intelligent expert systems; mathematical models; algorithms
References
Predicting pregnancy and live births using machine learning in the field of in-vitro fertilization (IVF) has long posed a significant challenge due to the difficulty in achieving consistent performance across various studies. In this paper, we conduct a comprehensive review and analysis of the existing limitations in current research. Additionally, we introduce a standardized machine learning pipeline, which serves as a valuable guide for future researchers. Furthermore, we propose two alternative modeling approaches: phase-by-phase modeling and subgroup FMLR modeling. These two alternatives not only enhance prediction performance but also offer clinically sensible explanations and timely guidance for users. Most notably, they shed light on the complexities of the IVF cycle, highlighting when, who, and where machine learning tasks face their greatest challenges. This insight can inspire future efforts in data collection and patient engagement processes.
Keywords: In-vitro fertilization; Machine Learning; Explainable AI; Medical AI
References
Combustion closed-loop control is an important technology for intelligent energy saving and emission reduction in internal combustion engines. Real-time feedback of combustion indicators plays an important role in the accuracy and rapidity of closed-loop control. However, calculating the combustion midpoint from the complete heat release rate curve often consumes considerable computing resources. In order to speed up the calculation, this paper proposes a method in which the Wiebe model is combined with a neural network to predict the combustion metric. Firstly, we match the Wiebe basis function to different working conditions by analyzing the heat release rate curve. Then the RLS-DE algorithm is developed to identify the heat release rate curve with high precision, and a BP neural network combined with the Wiebe model parameters is used to calculate CA50. Finally, the calculation accuracy and speed of the algorithm are verified in an HIL real-time simulation environment. The results show that the use of different Wiebe basis functions combined with the RLS-DE algorithm can fit the heat release rate curves under different working conditions with high precision, and the fitting error is within 5%. The CA50 prediction algorithm based on the parameters of the Wiebe model has a different calculation accuracy under different loads: the error is 6%-8% at low load and 2%-4% under high load conditions. The algorithm was implemented on the cRIO-9047 real-time computing platform, where it takes 8-12 µs, giving it high real-time performance and engineering application value.
References
Machine learning is widely utilized across various scientific disciplines, with algorithms and data playing critical roles in the learning process. Proper analysis and reduction of data are crucial for achieving accurate results. In this study, our focus was on predicting the correlation between cigarette smoking and the likelihood of diabetes. We employed the Naive Bayes classifier algorithm on the Diabetes prediction dataset and conducted additional experiments using the k-NN classifier. To handle the large dataset, several adjustments were made to ensure smooth learning and satisfactory outcomes. This article presents the stages of data analysis and preparation, the classifier algorithm, and key implementation steps. Emphasis was placed on graph interpretation. The summary includes a comparison of classifiers, along with standard deviation and standard error metrics.
Keywords: Machine Learning; Naive Bayes classifier; k-NN; Diabetes prediction dataset
References
This editorial examines the transformative role of Artificial Intelligence (AI) in enhancing cognitive accessibility for neurodiverse individuals. It explores the evolution from conventional assistive technologies to sophisticated AI-driven solutions, highlighting how these advancements are reshaping inclusivity in education and the workplace. The piece critically analyzes the benefits and challenges of AI in this context, considering ethical implications, user-centered design, and the need for equitable access. It concludes with a call to action for continued innovation and collaboration in developing AI technologies that truly cater to the diverse needs of neurodiverse individuals.
Keywords: Artificial Intelligence in Education; Cognitive Accessibility; Neurodiversity in Learning; AI Ethical Considerations; Inclusive Educational Technology
In recent years, the automotive industry has witnessed major advancements in automation systems, which have revolutionized manufacturing processes and improved vehicle production.
References
The evolution of industrial processes through automation has led to increased efficiency, precision, and adaptability. Incorporating automation and robotics into manufacturing and production has been a key driver of progress in various industries [1]. The capacity to enhance quality and repeatability, mitigate human error, and accelerate production rates has rendered these concepts indispensable across diverse sectors. Furthermore, the significance of automation is further emphasized in the era of Industry 4.0, where smart manufacturing and mechatronics play fundamental roles [2]. Automation encompasses the utilization of diverse control and sensor systems alongside actuators to control machinery, thereby diminishing the necessity for human intervention. Automation is widespread across various industries, with the automotive sector emerging as a main driver for the advancement of automation systems through its pursuit of high productivity and enhanced product flexibility [3, 4]. A notable advancement in automation is the integration of Industry 4.0 principles, that is, the combination of digital technologies with industrial processes [5]. As a result, different principles have been adopted, such as the Internet of Things (IoT), artificial intelligence (AI), and big data. The communication and decision-making abilities of machines lead to the concept of the "smart factory" [6], which excels in production line efficiency and productivity with fewer stoppages. Using real-time system monitoring and optimization, businesses can reduce material waste and power requirements while enhancing product quality.
References
Consider conversing with a chatbot that has a nearly human-like personality. That is precisely what OpenAI's ChatGPT offers. With over a million users within just five days of its debut, ChatGPT has become a major player in the tech and internet industries. ChatGPT, the brainchild of OpenAI, is poised for tremendous growth and market expansion, along with all other innovations. It is a great tool for producing outstanding work regardless of skill level because of its speedy generation of unique output. This study aims to determine the many uses of ChatGPT in the fields of business, healthcare, and education; assess ChatGPT's capacity to protect user security and privacy; and investigate ChatGPT's potential for future research in these areas. The researchers examined numerous articles to assess the aforementioned aims and draw their conclusions. The researchers emphasize how useful ChatGPT may be in different domains, such as business, education, and healthcare. Despite its potential, ChatGPT presents several ethical and privacy issues, which are thoroughly examined in this work.
Keywords: Artificial Intelligence; ChatGPT; Deep Learning; Chatbot; Jailbreaking
References
Our aquatic ecosystems' health depends on the quality of the water. Although continuous water quality monitoring at high temporal and geographical resolution is still prohibitively expensive, it is a crucial tool for watershed management authorities since it provides real-time data for environmental protection and for locating the sources of pollution. A reasonably priced wireless system for monitoring aquatic ecosystems will make it possible to gather data on water quality efficiently and affordably, helping watershed managers to preserve the health of aquatic ecosystems. A low-cost wireless water physiochemistry sensing system is introduced in this research. The system is proposed to measure water quality (pH, salinity, and turbidity) at stations or at home using an Internet of Things system, with the system controlled by an Arduino microcontroller and dedicated sensors used for this purpose. The findings show that a trustworthy monitoring system may be developed with the right calibration. Catchment managers will be able to sustain this surveillance for a longer period and continually check the quality of the water at a greater spatial resolution than was previously possible. The system has been tested in more than one location and the results have proven its success.
Keywords: ESP8266 Controller; LCD Display; Water Quality Monitoring; Salinity; Temperature; and Turbidity Sensor
References
There is an increased demand for individual authentication and advanced security methods with the advancement of technology in all fields. Traditional methods such as passwords are prone to use by proxies. The Electrocardiogram (ECG) and Photoplethysmogram (PPG) can be used as signatures for biometric authentication systems because of their specificity, uniqueness, and unidimensional nature. In this work, an ECG- and PPG-based biometric identification system using machine learning is proposed. The work provides an end-to-end architecture for biometric authentication using ECG and PPG biosensors through a Support Vector Machine.
Keywords: ECG; PPG; Biometric Authentication; SVM and Arduino
References
Innovative coded excitation techniques have been proposed to increase the signal-to-noise ratio (SNR) of ultrasound signals, which are significantly attenuated by scattering and absorption. Among the methods applied, the linear-frequency modulation signal, commonly defined as chirp signal, has been studied to provide images with greater depth, even in high attenuation media, maintaining the spatial resolution found in conventional excitation systems. This article presents a graphical user interface (GUI) based on Matlab to simulate short-duration conventional excitation (CE) pulses and long-duration chirp-coded excitation (CCE) pulses. The GUI allows the selection of apodization window, center frequency, and pulse duration parameters. In addition, it is possible to configure the bandwidth of the chirp signal. Pulse evaluations were performed with a central frequency of 1.6 MHz, using three cycles for CE and a duration of 5, 10, and 20 µs for CCE with a bandwidth of ±200 kHz, ±400 kHz, and ±1 MHz in a phantom simulated with ten targets. The echo signals for the CCE were processed using a matched filter to evaluate the spatial resolution and attenuation. Simulation results demonstrate the flexibility and performance of the proposed GUI for ultrasound excitation studies. The evaluation of CCE with a frequency of 1.6 MHz ± 1 MHz and matched filter improved spatial resolution by 86%. In contrast, a maximum increase in attenuation of the processed signal of 33% was observed.
Keywords: Ultrasound; conventional excitation; chirp-coded excitation; matched filter; signal processing
References
Although the economic value of data has received widespread attention, most financial enterprises still differ in their understanding of digital transformation from the banking industry, which has always focused on the accumulation of digital capabilities, especially in the understanding and practice of data management systems. Addressing the question of how to promote data management capacity building on the existing digital foundation of enterprises, the author draws a "data governance voyage chart" based on the relevant theory of DMBOK 2.0 and research on the financial leasing industry, aiming to analyze the dynamic balance between the elements of data governance by discussing the dialectical relationship between the five elements of "wind", "tower", "ship", "sail" and "sea", and to share relevant thoughts on deepening data governance in the financial industry.
Keywords: Data governance; Financial leasing; Digital transformation
This means that data assets are formally included in the scope of financial accounting, and the attributes and values of these assets are accounted for and reflected by way of enterprise financial accounting, which is of decisive significance for establishing data elements as an important component of enterprise assets, especially intangible assets.
However, due to the special properties of data assets, there are still some difficulties in the process of financial entry.
First of all, we interpret this from the perspective of financial accounting. The three major characteristics of assets are: 1. an asset should be owned or controlled by the enterprise; 2. an asset is expected to bring economic benefits to the enterprise; 3. an asset is a resource formed by a past transaction or event.
Similarly, data assets must also conform to the above three characteristics, so data ownership must be established to prove that the asset belongs to the resources owned or controlled by the enterprise. Secondly, the economic benefits that data assets can bring to the enterprise must be measurable and accurately calculable.
Problems in accounting recognition of data assets
The rights and responsibilities of data assets are uncertain
To prove that a data asset belongs to the resources owned or controlled by the enterprise, data ownership must be established. For the data used internally, a so-called data owner can perhaps be identified. However, much data is entangled with questions of data ethics and is not well defined as a resource controlled by the enterprise, such as owner membership information, business operation information, and so on. Does this part of the data really belong to the enterprise? Is it personal privacy, or is it public data designated by the government that must be provided free of charge?
The revalidation of data assets is unclear
If we regard data assets as a special category of intangible assets, the capitalization and cost problems that exist in the subsequent recognition of intangible assets also apply to data assets. There is a difference between capitalized expenditure and expensed expenditure. In the production and operation activities of an enterprise, the consumption of assets is tracked and further classified as capitalized or expensed expenditure. The standard of division is the destination of the consumption: if the consumption is exchanged for a new asset, it is capitalized expenditure; if it is used for business operation, that outflow of economic benefit is called expensed expenditure. However, the data in a data asset has special properties, and it is difficult to define which data generates value and how much value it generates. It is therefore even harder to define capitalized or expensed expenditures.
The conditions for the confirmation of data assets are not uniform
As is well known, in the existing balance sheet the conditions for the recognition of fixed assets and intangible assets are clearly defined (for example, fixed assets not yet put into use are generally treated as construction in progress). With these recognition principles, it is possible to distinguish which items are intangible assets and to proceed with the next step of the work. However, current research has not clarified the recognition principles for data assets, and the relevant theory has not been perfected. Data quality determines whether data can be included as an asset; similarly, changes, derivation, destruction and other actions during the use of data also affect the recognition of data assets. The stages of data collection, storage, processing and cleaning therefore each need a corresponding accounting value, and the recognition principle of each stage must be clarified.
Problems in accounting measurement of data assets
Initial measurement of data assets
Subsequent measurement of data assets
Proposals for the development of accounting recognition of data assets
Clearly reconfirm data assets.
1. Data assets obtained from external sources. In particular, it should be emphasized that for data assets obtained from the outside, if there is a transfer of ownership or partial ownership during the transaction process, the asset can be confirmed and included in the "data assets" column under the "intangible assets" account. If the transaction does not involve the transfer of ownership, but, along with the right to use the data, the enterprise obtains certain rights such as agency, distribution or resale, so that income can be obtained through transactions in the data asset, the enterprise can include it in the asset column. If only the right to use the data is obtained, such as a use license, and the enterprise cannot obtain future income through external transactions, no transfer of "data assets" is involved and no related change in the account can occur, so the enterprise can only include it in the cost or expense column.
2. Internally generated data assets. According to the formation mode of internally generated data assets, the enterprise's own data assets can be divided into actively researched and developed data assets and associated data assets generated in the course of production and operation. In general, according to the relevant provisions on intangible assets, expenditures in the research stage should be included in the current profit or loss; expenditures related to the development stage that meet the capitalization conditions shall be capitalized, and research and development expenditures that cannot be distinguished by object shall be fully included in the profit and loss of the current period. However, in actual operation, due to the influence of many unforeseeable factors in R&D activities, it is not easy to divide the two stages specifically and clearly, which requires special consideration.
Clarify the conditions for the recognition of data assets. The definition of data assets should meet clear principles. This paper holds that only when the four conditions of realizability, controllability, quantifiability and identifiability are met can data resources enter the statistical work of accounting financial statements and be regarded as data assets. The relevant validation conditions for data assets are:
The first is realizability: data assets must be able to bring economic benefits to the enterprise;
The second is controllability: data assets must be data resources that the enterprise can own or control under the premise of compliance with laws and accounting standards;
The third is quantifiability: data assets must be separable or divisible from the actual production and operation of the enterprise, and reliably measurable in currency;
The fourth is identifiability: data assets can be separated from the enterprise or derived from contractual rights.
Proposals for the development of accounting measurement of data assets
Initial measurement of data assets
Subsequent measurement of data assets
Summary
Although the digital economy is in full swing and the economic value of data has become increasingly apparent, the accounting treatment of data assets is still a relatively new research topic. We need to truly master the systematic theory of data asset accounting in order to successfully bring data assets onto the balance sheet.
After eight months of soliciting opinions, the Ministry of Finance of China officially issued the "Interim Provisions on Accounting Treatment of Enterprise Data Resources" on August 21, and it will come into effect from January 1, 2024.
Enterprises shall, in accordance with the relevant provisions of accounting standards for business enterprises, recognize, measure and report transactions and events related to data resources according to the purpose of holding data resources, formation mode and business model, as well as the expected consumption mode of economic benefits related to data resources.
References
The true and real development of electronics comes with research on semiconductors, which represent the interface of all electronic transformation in any material state, even the gas state, and which interact with the characteristics and qualities of the environmental and natural ambient to control electromagnetic fields and produce enough electromagnetic energy. Electric fields can affect semiconducting properties, generating different electric potentials, spin currents, dosages of super-electrons (or Cooper pairs [1]) and ionic concentrations, or even yielding derived products (at the photon level) from an electromagnetic plasma as different fermion species, even the Majorana fermions structuring conductor materials according to their superconductivity. Moreover, semiconductors are that unique conductor class that acts as either insulator or conductor, depending on the electronic saturation and its interaction with environmental factors such as temperature, light, electric currents, magnetic fields, or even humidity. The Universe is a superconductor, and it is there to be used through devices that control and modify its supercurrents.
Python using Object-Oriented Programming
Python is a programming language that is both flexible and easy to learn. It is compatible with the object-oriented programming paradigm, which makes use of classes and objects to help organize and structure code. This method is perfect for applying complex algorithms like Double Machine Learning because it encourages modularity, reusability, and a distinct separation of responsibilities.
Designing the Double Machine Learning Class
To encapsulate the DML process, one must create a Python class that represents the DML algorithm. This class consists of various methods to handle key components of the DML framework, such as data preparation, model training, and treatment effect estimation. By organizing the code in a class, we enhance code readability, maintainability, and extensibility.
Data Preparation
The first step in any machine learning project is data preparation. Our DML class includes methods for loading and preprocessing data. This involves splitting the dataset into training and testing sets, handling missing values, and encoding categorical variables. Utilizing an object-oriented approach allows for easy customization of data preprocessing steps based on the specific requirements of the analysis.
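As a rough illustration of what such a preparation step might look like, the sketch below builds a reusable preprocessing pipeline and a train/test split with scikit-learn; the function name prepare_data, the column handling, and the choice of median/most-frequent imputation with one-hot encoding are illustrative assumptions, not a prescribed recipe from the article.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

def prepare_data(df: pd.DataFrame, outcome_col: str, treatment_col: str,
                 test_size: float = 0.25, seed: int = 0):
    """Split a raw DataFrame into covariates X, treatment t and outcome y,
    and build a (not yet fitted) preprocessing pipeline for the covariates."""
    y = df[outcome_col].to_numpy()
    t = df[treatment_col].to_numpy()
    X = df.drop(columns=[outcome_col, treatment_col])

    num_cols = X.select_dtypes(include="number").columns
    cat_cols = X.columns.difference(num_cols)

    # Numeric columns: impute missing values and standardize.
    # Categorical columns: impute and one-hot encode.
    preprocess = ColumnTransformer([
        ("num", Pipeline([("impute", SimpleImputer(strategy="median")),
                          ("scale", StandardScaler())]), num_cols),
        ("cat", Pipeline([("impute", SimpleImputer(strategy="most_frequent")),
                          ("onehot", OneHotEncoder(handle_unknown="ignore"))]), cat_cols),
    ])

    # Hold out a test set so the nuisance models can be evaluated out of sample.
    X_tr, X_te, t_tr, t_te, y_tr, y_te = train_test_split(
        X, t, y, test_size=test_size, random_state=seed)
    return preprocess, (X_tr, t_tr, y_tr), (X_te, t_te, y_te)
```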
Model Training
The DML class incorporates methods to train both the treatment model and the outcome model. The treatment model predicts the probability of receiving treatment, whereas the outcome model approximates the potential outcomes given treatment status. We employ popular machine learning libraries such as scikit-learn to implement these models within our DML class. The modular structure of the class allows users to experiment with different algorithms and hyperparameters seamlessly.
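A minimal sketch of this step is given below, assuming gradient-boosted models from scikit-learn as stand-ins for whatever estimators a user actually selects; the helper name fit_nuisance_models is hypothetical. Note that in the partially linear variant sketched here the outcome model regresses the outcome on the covariates only, whereas other DML variants condition on treatment status as described above.

```python
from sklearn.base import clone
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor
from sklearn.pipeline import make_pipeline

def fit_nuisance_models(preprocess, X_train, t_train, y_train):
    """Fit the two nuisance models used by DML:
    a treatment model m(X) = P(T = 1 | X) (propensity score) and
    an outcome model g(X) = E[Y | X]."""
    # clone() gives each pipeline its own copy of the preprocessor,
    # so fitting one model does not overwrite the other's fitted state.
    treatment_model = make_pipeline(clone(preprocess), GradientBoostingClassifier())
    outcome_model = make_pipeline(clone(preprocess), GradientBoostingRegressor())

    treatment_model.fit(X_train, t_train)
    outcome_model.fit(X_train, y_train)
    return treatment_model, outcome_model
```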
Double Machine Learning Estimation
The heart of the DML framework lies in the estimation of treatment effects. Our object-oriented implementation facilitates the computation of treatment effects using the fitted treatment and outcome models. This separation of concerns enhances code maintainability and allows users to easily swap models or modify the estimation procedure without altering the entire codebase.
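The sketch below shows one common way this estimation can be carried out: the residual-on-residual (orthogonalized) regression of the partially linear DML model with K-fold cross-fitting. The function name, the plug-in standard-error formula, and the assumptions that the covariates arrive as a pandas DataFrame and that the treatment is binary are illustrative choices rather than the article's definitive implementation.

```python
import numpy as np
from sklearn.base import clone
from sklearn.model_selection import KFold

def estimate_ate_dml(treatment_model, outcome_model, X, t, y,
                     n_splits: int = 5, seed: int = 0):
    """Orthogonalized DML estimate of theta in the partially linear model
    Y = theta * T + g(X) + eps, with K-fold cross-fitting of the nuisances."""
    y_res = np.zeros(len(y), dtype=float)
    t_res = np.zeros(len(y), dtype=float)

    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for train_idx, test_idx in kf.split(X):
        X_tr, X_te = X.iloc[train_idx], X.iloc[test_idx]
        m = clone(treatment_model).fit(X_tr, t[train_idx])   # P(T = 1 | X)
        g = clone(outcome_model).fit(X_tr, y[train_idx])     # E[Y | X]
        # Out-of-fold residuals remove the part of T and Y explained by X.
        t_res[test_idx] = t[test_idx] - m.predict_proba(X_te)[:, 1]
        y_res[test_idx] = y[test_idx] - g.predict(X_te)

    # Residual-on-residual regression gives the orthogonalized estimate.
    theta = np.sum(t_res * y_res) / np.sum(t_res ** 2)

    # Plug-in standard error based on the estimator's influence function.
    psi = t_res * (y_res - theta * t_res)
    se = np.sqrt(np.mean(psi ** 2) / len(y)) / np.mean(t_res ** 2)
    return theta, se
```

A call such as theta, se = estimate_ate_dml(treatment_model, outcome_model, X, t, y) would then return the estimated effect together with an approximate standard error.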
Conclusion
Adopting an object-oriented approach to implement Double Machine Learning in Python brings numerous benefits, including code organization, modularity, and ease of extensibility. The resulting class encapsulates the entire DML process, providing users with a flexible and customizable tool for estimating treatment effects. This approach not only enhances code readability but also fosters collaboration and code sharing in the growing field of causal inference. As machine learning and causal inference continue to intersect, an object-oriented DML implementation in Python proves to be a valuable asset for researchers and practitioners alike.
References
Double Machine Learning (DML) is a powerful framework that combines the flexibility of machine learning with the robustness of statistical inference. It is particularly useful in settings where treatment effects are of interest, such as in econometrics and causal inference. In this article, we explore an object-oriented approach to implementing Double Machine Learning using Python, leveraging the simplicity and modularity of object-oriented programming (OOP) principles.
References
In recent years, there has been an increasing demand for the optimization of alloy properties, driven by the growing complexity of end products and the need to reduce development costs. In general, Thermo-Calc based on the CALPHAD method, which calculates the thermodynamic state of an alloy, is widely used for efficient alloy development. However, a challenge in alloy exploration using Thermo-Calc is the need for specialized computational skills and the significant computational effort required due to the extensive range of calculation conditions for numerous alloys. Consequently, we have developed a deep learning model that rapidly and accurately predicts the temperature-dependent changes in equilibrium phase fractions for 6000 series aluminum alloys (Al-Mg-Si based alloys), which are widely used in industry, using calculations from Thermo-Calc. We developed the architecture of the deep learning model based on the Transformer, which is commonly used in natural language processing tasks. The model is capable of performing calculations more than 100 times faster than Thermo-Calc. Furthermore, by leveraging backpropagation of errors in the trained model, we developed a method to estimate the alloy composition corresponding to phase-fraction results calculated with Thermo-Calc.
Keywords: Deep Learning; Inverse Problem; CALPHAD; Transformer
References
Water Dump Flooding is a less well known method for revitalizing mature fields. However, in the Boca Field, specifically Reservoir 95 Y-102, this is exactly what occurred. A periodic review of this mature field suggested abandoning the only producing well in this reservoir, well X-3, because an adjacent well, X-6, located up-dip, was known to produce at a 99% water cut.
Although other wells were produced in the reservoir, only a 12% recovery was achieved. Therefore, an integrated study to re-evaluate the parameters and properties of Reservoir 95 Y-102 began in 2005. During the well analysis, it was found that the water production of well X-6 was the result of the communication behind the casing of the well with the underlying aquifer 101 and not because of the advancement of the oil-water contact, as initially suggested. Recompletion of well X-3 was recommended because an injection process known as dump flooding was underway. In addition, aquifer support for production over the previous seven years strongly indicated that dump flooding would produce the desired production increases.
Under sub-optimal conditions, accidental Water Dump Flooding rejuvenated the producing well, increasing production to more than 300 BOPD with an acceptable water cut of 61%. The analysis and understanding of the process that occurred, and the way this accidental Dump Flooding was exploited, raised production from nearly zero to over 100,000 barrels in a single year. This lays the foundation for using Dump Flooding as a production and development strategy for other projects in the area.
References
The CALculation of PHAse Diagrams (CALPHAD) method searches for the state of minimal Gibbs energy as the equilibrium state. To perform a thermodynamic equilibrium calculation for a single material composition and to predict a phase diagram, the CALPHAD calculation can be completed within a realistic time. However, screening many material compositions and predicting the corresponding phase diagrams takes much time. For alloy materials, for example, it would take 161 hours to calculate phase diagrams for all alloy compositions when screening 10,000 sets of explanatory variables, i.e., compositions and manufacturing conditions, since each set takes 58 seconds to calculate. The present study aims to provide a calculation device, method, and program for quickly predicting the thermodynamic equilibrium state. To achieve this objective, we developed a deep learning model based on the Transformer architecture, which is primarily used for various natural language processing tasks, such as machine translation, text summarization, question answering, and sentiment analysis. The encoder part of our model extracts the features necessary for phase diagram prediction from the inputted alloying elements, while the decoder part predicts a phase diagram for each temperature based on the results from the encoder. We calculated 800,000 species using the CALPHAD method and employed these data to train our model. The trained model can calculate thermodynamic equilibrium states more than 100 times faster than the CALPHAD method and correctly reproduces the phase diagrams of the ground truths. Based on the present result, we could devise a calculation device, a calculation method, and a calculation program for predicting the thermodynamic equilibrium state in a short time.
Keywords: Thermodynamic equilibrium state; Transformer; Neural network; Deep learning; CALPHAD
References
The VITO (pn 20150100457, 2015) is a novel training kit that has been designed to be portable, light, mobile, and inexpensive, adjusted to the clinical needs of both beginner and advanced surgeons and allowing continuous and systematic self-centered and/or collaborative training in Surgical Endoscopy. The impact of the VITO (pn 20150100457, 2015) assisted endoscopy training kit on the learning curve is initially evaluated in terms of the "machine learning curve", which refers to the training process and the computation of the time point at which an endoscopic operation is considered learned during training in Endoscopic Hemostasis.
Keywords: Human System Interaction; Training Kit; Surgical Endoscopy; Machine Learning Curve; Endoscopic Hemostasis
References
The study explores how Virtual Reality (VR) applications address the challenges of conveying tactile feedback in traditional calligraphy practice. It examines the feasibility of VR in simulating calligraphy's tactile aspects, including brush strokes, ink flow, and paper texture, and the importance of tactile feedback in learning calligraphy. The challenges in simulating realistic tactile sensations in VR are discussed, alongside current technologies in haptic feedback, strategies for simulating calligraphy tools, and case studies of VR calligraphy applications. It concludes with future directions for improving tactile feedback in VR calligraphy training.
Keywords: Virtual Reality; Tactile Feedback; Calligraphy; Haptic Technology; Educational Technology
According to the findings of many researchers [1-3], for the design of RC structures taking into account the requirements of ensuring their strength and durability, the principle of safety can be realized to the maximum level only if the following fundamental issues are further developed:
One of the most important tasks in calculating the reliability of RC structures is the selection and justification of probabilistic models of random variables. This task in practice is significantly complicated by the data uncertainty, obtained as a result of a lack of statistical information. Uncertainty, in this case, represents material properties (for example, strength of reinforcement or concrete), external loads, geometric dimensions, operating conditions, etc. [4, 5]. These uncertainties, if ignored, can lead to low reliability of engineering structures and even catastrophic consequences (especially relevant for buildings of the CC3 consequence class). To solve this problem, the reliability theory of building structures was developed.
All reliability indicators that can be used in formulating normative requirements for structures are functions of the failure probability over a certain time. Therefore, the main task of probabilistic calculations is to calculate the failure probability.
When random changes in input parameters (variables) are insignificant (up to 20%), we can use the statistical linearization method. In this case, to calculate the statistical characteristics of the limit state function, it is linearized by expanding it into a Taylor series at the point of its mathematical expectation. However, no distribution function of random variables is strictly linear, and since the permissible error in calculating failures of nonlinear systems depends not only on the number of input random variables but also on the volumes of their statistical samples, the search for an analytical solution using this method is almost impossible.
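For reference, a minimal statement of this first-order approximation, assuming a limit state function Z = g(X_1, ..., X_n) with independent inputs and, for the last expression, approximate normality of Z, is:

```latex
\mathrm{E}[Z] \approx g(\mu_1,\dots,\mu_n), \qquad
\mathrm{Var}[Z] \approx \sum_{i=1}^{n}
\left( \left.\frac{\partial g}{\partial x_i}\right|_{\boldsymbol{\mu}} \right)^{2} \sigma_i^{2}, \qquad
P_f \approx \Phi\!\left( -\frac{\mathrm{E}[Z]}{\sqrt{\mathrm{Var}[Z]}} \right).
```

The ratio E[Z]/sqrt(Var[Z]) is the reliability index; the approximation degrades quickly once g is strongly non-linear, which is exactly the limitation described above.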
On the contrary, the method of statistical simulation (the so-called Monte Carlo method), a universal method for calculating a wide class of probabilistic problems, is especially effective for nonlinear systems. The main idea of these methods consists of constructing a sample (based on the statistical distribution) for each random variable involved in the task; since these methods simulate the limit state function directly, the larger the sample taken, the more accurate the estimate of the structure's failure probability will be [6].
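As a simple illustration of this simulation idea, the sketch below estimates the failure probability of a hypothetical limit state g(R, S) = R - S (resistance minus load); the lognormal/normal input models and all numerical parameters are placeholder assumptions, not values taken from the cited studies.

```python
import numpy as np

def monte_carlo_failure_probability(n_samples: int = 1_000_000, seed: int = 0):
    """Estimate P_f = P(g(X) < 0) by sampling the random inputs and
    counting how often the limit state function is violated."""
    rng = np.random.default_rng(seed)

    # Hypothetical input models: resistance R (lognormal) and load S (normal).
    resistance = rng.lognormal(mean=np.log(30.0), sigma=0.10, size=n_samples)
    load = rng.normal(loc=20.0, scale=4.0, size=n_samples)

    g = resistance - load              # limit state: failure when g < 0
    p_f = np.mean(g < 0.0)

    # Standard error of the estimate shrinks as 1/sqrt(n), so larger samples
    # give a more accurate failure probability, as noted above.
    se = np.sqrt(p_f * (1.0 - p_f) / n_samples)
    return p_f, se

if __name__ == "__main__":
    p_f, se = monte_carlo_failure_probability()
    print(f"estimated P_f = {p_f:.4e} +/- {se:.1e}")
```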
As a result, the choice of probabilistic models of random variables for the further calculation of the reliability of structural members will depend on the amount and type of statistical information obtained about the random variable. It can be added that the promising and relevant directions in the development of probabilistic models of random variables and methods for analyzing the reliability of RC structures (especially with incomplete statistical information) are the following:
References
The progress of probabilistic approaches to assessing the structural safety of load-bearing members (including reinforced concrete) of buildings and structures is a highly relevant scientific problem. Moreover, with the progress of digital technologies and the increasing capabilities of numerical calculations today, there is also an urgent need for the development of stochastic approaches in construction - based on mathematical statistics, theories of probability, and reliability.
Reference
The COVID-19 pandemic has emerged as a pivotal moment for the pharmaceutical industry, catalyzing profound shifts in market dynamics and operational strategies. This research paper delves into the nuanced ramifications of the pandemic on the pharmaceutical sector, with a focused examination on three prominent entities: Sun Pharma, Cipla, and Divi's Labs. Through a comprehensive analysis, this study endeavors to elucidate whether the COVID-19 crisis has posed a formidable challenge or presented opportunities for growth within these pharmaceutical companies.
The objectives of this research are threefold. Firstly, it aims to investigate the impact of the COVID-19 pandemic on the profitability of pharmaceutical companies in India. Secondly, it seeks to scrutinize the fluctuations in share prices of Sun Pharma, Cipla, and Divi's Labs before and after the onset of the pandemic. Lastly, it endeavors to discern the relationship between key financial metrics such as revenue, profit, return on investment (ROI), and the corresponding share price performance of these companies.
Employing a combination of quantitative methodologies and qualitative insights, this study offers valuable perspectives on the resilience and adaptability of the pharmaceutical industry in the face of unprecedented global challenges. By unraveling the intricate interplay between pandemic-induced disruptions and market dynamics, this research contributes to a deeper understanding of the evolving landscape of the pharmaceutical sector amidst the COVID-19 crisis.
Keywords: COVID 19 Pandemic; Pharmaceutical Industry; Return on Investment (ROI); and Share Price Fluctuations
References
When steel structures experience short-duration overloads, for instance a wind burst or a breaking wave, the structure might yield for a short moment, and after this yielding event the reduction of stiffness could be minimal. The result is mainly a permanent deformation, which is normally detected using tilt meters. But since tilt meters measure angles at the location of instrumentation, and in some cases yielding does not result in permanent changes of tilt angles, the yielding must in these cases be detected by other means. In this paper we briefly discuss the use of short-duration changes of natural frequency and damping, following ideas from the earthquake cases of period elongation. However, due to the strong influence of the external forces on the estimation of these changes, other effects are considered, such as quasistatic displacement movement, the slamming effect of the overload, permanent displacement using low-frequency signals, and mode shape changes using principles from stochastic subspace identification. This defines a set of five non-linear detection (NLD) indicators that are studied on a case of possible yielding in a wave-loaded offshore structure using simulation.
Keywords: Short duration overload; permanent deformation; quasistatic displacement; Bloop; SSI null space; wave loading; offshore structure
References
In an era characterized by the swift development of the digital economy, the digital economy can strengthen traditional industries and aid industrial agglomeration. This paper examines the effects and mechanism of the digital economy on industrial agglomeration by developing an indicator system and employing the entropy weight approach to assess the digital economy, while industrial agglomeration is measured using the location entropy approach. According to the research, the digital economy can stimulate the establishment of industrial agglomeration, and this boosting influence is particularly visible in locations in the eastern region, where marketization, population density and technological innovation are significant. The development of the circulation industry plays a significant promoting role in the process by which the digital economy promotes industrial agglomeration. Therefore, we should actively develop the digital economy and boost infrastructure building; promote the combination of online and offline industrial clusters, actively build online clusters, shorten the geographical distance between enterprises, and accelerate industrial integration; and develop the circulation industry, reduce production and transportation costs, enhance the urbanization level, and attract capital to assist industrial agglomeration.
Keywords: Digital economy; Circulation industry; Industrial agglomeration; Regulation effect
We use the internet and internet-retrieved information every day to meet our elementary needs, acting like skilled digital professionals. Industry needs more professionals to code more business processes in order to grow and be able to branch out. In today's developing digital economy it is sometimes difficult to find people with the right mix of soft and hard skills. Some research studies have brought to the surface that many youths worldwide do not have basic digital skills. Every youth should be ready for change with a set of career goals, strategies and options based on their interests, personality, values and skills, both hard and soft. The most important point is that throughout your life you will play a combination of study, work and citizen roles that are intermixed.
References
Existence constraints were defined in the Relational Data Model, but, unfortunately, are not provided by any Relational Database Management System, except for their NOT NULL particular case. Our (Elementary) Mathematical Data Model extended them to function products and introduced their dual non-existence constraints. MatBase, an intelligent data and knowledge base management system prototype based on both these data models, not only provides existence and non-existence constraints, but also automatically generates code for their enforcement. This paper presents and discusses the algorithms used by MatBase to enforce these types of constraints.
Keywords: Existence and Non-Existence Constraints; The (Elementary) Mathematical Data Model; MatBase; Database Design; Non-relational Constraint Enforcement
Reference
The adoption of Artificial Intelligence (AI) within the realms of project and portfolio enterprise management (PPEM) is revolutionizing the domain of project management. AI's potential to boost efficiency, productivity, and decision-making marks a significant shift, necessitating a new set of skills and competencies for project management practitioners. This ushers in an era marked by enhanced capabilities and novel challenges. With AI automating routine operations and offering insights based on data, it's imperative for project management professionals to adapt and evolve to maintain their critical role in this AI-influenced epoch. This study delineates five essential success factor dimensions essential for project management professionals to excel in this evolving landscape. These include deep industry knowledge, proficiency in core project portfolio enterprise management (PPEM) processes, fundamental coding skills, expertise in data visualization, and proficiency in data science. Developing these competencies enables project management professionals to not only excel in the era of AI but also play a pivotal role in shaping the future of project portfolio management.
Keywords: Artificial Intelligence; Project Management; Finance; Critical Success Factors; Industry Expertise; Core Processes; Coding Skills; Data Visualization; Data Science Acumen
References
On a continent that is frequently portrayed in a condition of permanent crisis, development appears to be an impossibility. In fact, observers of African affairs, especially those in the West, cannot, in light of recent military takeovers and armed conflicts like those in Sudan and Gabon, feel reassured that Africa is rising, a claim once made by influential figures in world opinion like The New York Times, The Economist, and others. It appears that development is in critical need of an immediate revival. To put things in perspective, the Organization for Economic Co-operation and Development (OECD) reports that official development assistance (ODA) reached a total of USD 185.9 billion in 2021. However, the depressing results of development demonstrate the futility of international development. For instance, since 2019, the majority of the nations receiving aid from abroad have seen increases in their rates of poverty, with 50% to 70% of their people living below the poverty line (1,2).
Situations around the world are not promising. The World Bank estimates that in 2022 over 700 million individuals worldwide were living in extreme poverty. The UN's most recent SDG 2023 progress report (1.3) presents a dismal picture. On almost 50% of the targets, progress has been insufficient and weak. Even worse, almost 30% of the SDG targets have seen either a standstill or a reversal in progress. These include important goals concerning hunger, poverty, and the environment. Moreover, the report finishes on a very concerning note: over half of the world is falling behind, and most of those falling behind reside, you guessed it, in the Global South.
Artificial intelligence (AI) is being positioned as a useful tool for accelerating development objectives and targets and repairing the flawed international development paradigm as the global development agenda suffers. International development organizations and regional partners have implemented innovative AI for development (AI4D) initiatives in a number of African nations, including those in Sub-Saharan Africa and West Africa. With all of the hype around artificial intelligence, this seems like a reasonable and necessary endeavor. However, the deficit model of development serves as the foundation for AI initiatives in Africa. This deficit argument highlights how the lack of human and technological capability is the direct cause of the Majority World's inability to progress.
In an effort to maximize the amount of electricity available, the Responsible AI Lab (RAIL) in Ghana (1.4) is attempting to integrate efficient energy distribution models into the system. Natural language processing (NLP) is arguably one of the most promising uses of AI in the region. Emerging start-ups using development funding programs like the Lacuna Fund are attempting to create language models for indigenous African languages like Igbo, Hausa, Yoruba, Twi, Akan, and others. These models can be integrated into further applications in fields like education and healthcare. Given the regional circumstances in the majority of African nations, the advantages of these programs and apps may be obvious.
Actually, though, large multinational corporations' CSR programs (4) and the policies of international development organizations have a significant influence on most AI development in Africa. In an effort to become future bright spots in the field of technology, these initiatives which are carried out in partnership with Big Tech and regional players like scientists and practitioners are unduly focused on developing technological solutions and local African datasets. Much time and money are being spent collecting local datasets so that machine learning models for predictive analysis can be updated based on the local context.
But how much is known about the goals and applications of these AI programs, and which social groups and communities stand to benefit from them? How will the local context respond to these technology solutions? To put it bluntly, there is not enough deliberate engagement with the political imaginations of the various local communities in terms of their aspirations for an AI-powered technological future.
References
Aiming at the lack of a general reliability evaluation for the safety measures executed in protection-device maintenance, which currently rely on habit and vary widely in practice, this paper presents a general reliability evaluation model of secondary safety measures using the two dimensions of reliability and complexity. For a specific maintenance task, the model indicates the optimal method and thereby improves on the present working method. Because the model takes into account the habits of the maintenance firm, its application is feasible.
Keywords: reliability; complexity; secondary safety measures; relay protection; function link
The dawn of 5G technology marks a significant milestone in the realm of wireless communication. With its promise of ultra-fast speeds, low latency, and massive connectivity, 5G has the potential to revolutionise industries ranging from healthcare to manufacturing. Its deployment is paving the way for autonomous vehicles, augmented reality experiences, and smart cities, ushering in an era of unprecedented connectivity and innovation.
Moreover, the rise of IoT devices is driving the demand for more efficient and reliable wireless networks. These interconnected devices, ranging from smart home appliances to industrial sensors, are generating vast amounts of data that need to be transmitted and analysed in real-time. As such, there is a growing emphasis on developing wireless technologies capable of supporting the massive scale and diverse requirements of IoT applications.
In addition to 5G and IoT, other emerging wireless technologies such as Wi-Fi 6 and Li-Fi are also poised to make a significant impact. Wi-Fi 6, the latest iteration of the Wi-Fi standard, offers higher speeds, increased capacity, and improved performance in dense environments. This technology is set to enhance the connectivity experience for users across various settings, from homes to public spaces.
On the other hand, Li-Fi represents a novel approach to wireless communication, using light waves instead of radio frequencies. By harnessing visible light to transmit data, Li-Fi offers potentially faster speeds and greater security compared to traditional Wi-Fi. While still in its infancy, Li-Fi holds promise for applications where radio frequency interference is a concern, such as in hospitals and aircraft cabins.
Despite the immense potential of these technologies, challenges remain in their widespread adoption. Issues such as spectrum scarcity, interoperability, and cybersecurity must be addressed to fully realise the benefits of wireless connectivity. Moreover, there is a need for continued research and collaboration to overcome technical barriers and ensure that these technologies are accessible to all.
In conclusion, the latest advancements in wireless technologies hold the promise of transforming how we communicate, collaborate, and interact with the world around us. From 5G networks to IoT devices, the possibilities are endless, and the opportunities for innovation abound. By embracing these technologies and addressing the challenges ahead, we can pave the way for a more connected and prosperous future.
In today's fast-paced world, where connectivity is paramount, wireless technologies have emerged as the cornerstone of modern communication. From the Internet of Things (IoT) to 5G networks, the landscape of wireless technology is constantly evolving, presenting both challenges and opportunities. In this editorial, we delve into the latest advancements in wireless technologies, examining their influence on various industries and envisioning their future trajectory.
References
The airline industry faces the persistent challenge of flight delays, resulting in financial losses and reduced customer satisfaction. Delta Airlines, with its primary operations centered at Hartsfield-Jackson Atlanta International Airport, encounters difficulties in effectively predicting and managing these delays. Accurate delay prediction is essential for optimizing operational efficiency, improving passenger experience, and maintaining competitiveness within the airline industry. This project focuses on the development of predictive models to forecast flight delays for Delta Airlines departures from this major hub by leveraging historical flight data. Through the use of machine learning and predictive models, the study aims to provide valuable insights and strategies for Delta Airlines to enhance strategic planning, regulatory compliance, and overall operational performance. The results of this project have the potential to contribute significantly to the airline’s efforts in mitigating flight delays and improving customer satisfaction.
Keywords: Prediction; Delay; Flights; Airlines; SARIMA; Prophet; LSTM
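As a hedged illustration of the time-series side of this work, the sketch below fits a SARIMA model (one of the model families named in the keywords) to a hypothetical daily series of average departure delays; the file name, column names, and the (p,d,q)(P,D,Q,s) orders are illustrative assumptions, not the authors' configuration.

```python
# A minimal sketch (not the authors' code) of fitting a SARIMA model to a
# hypothetical daily series of average departure delays.
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# delays.csv is a hypothetical file with columns: date, avg_delay_minutes
df = pd.read_csv("delays.csv", parse_dates=["date"], index_col="date")
series = df["avg_delay_minutes"].asfreq("D")

# Weekly seasonality (s=7) is a common assumption for airline operations data.
model = SARIMAX(series, order=(1, 1, 1), seasonal_order=(1, 1, 1, 7))
result = model.fit(disp=False)

# Forecast average delay for the next 14 days.
forecast = result.forecast(steps=14)
print(forecast.head())
```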
Today, the retrieval of goods, production waste, and packaging is a common phenomenon that both traditional and online manufacturers, wholesalers, retailers, as well as logistics service providers frequently have to deal with. In e-commerce, the retrieval of products plays a critical role in improving customer satisfaction. Consequently, numerous businesses and scholars are keen on comprehending the role of reverse logistics operations. This research has a primary focus on investigating how reverse logistics impacts the shopping experience of e-commerce customers. The study employed empirical methods and quantitative analysis with a sample size of 203 observations. The study’s findings demonstrated a positive correlation between different aspects of reverse logistics and the shopping experience and satisfaction of e-commerce customers. Moreover, it was revealed that the shopping experience acts as a mediator in the relationship between reverse logistics and customer satisfaction. Based on these findings, this research seeks to assist businesses in enhancing the quality of their reverse logistics services and optimizing overall logistics operations in e-commerce.
The gradual decline of oil resources and increasing global warming around the world have created an urgent need to search for alternatives to crude oil. Electric Vehicles (EVs) can counter the need for crude oil, but they suffer from range anxiety. Hybrid Electric Vehicles (HEVs) have proved to be a viable option for ensuring improved fuel economy and reduced emissions. The performance of the vehicle, energy consumption, and emissions depend upon the selection of different vehicle topologies.
Before manufacturing an actual HEV prototype and testing it in the laboratory, on test tracks, and in the field, it is important to give appropriate consideration to modeling it in a simulation environment. There exist three main stages of computational modeling in the development activity of HEVs, viz., model in the loop (MiL), software in the loop (SiL), and hardware in the loop (HiL). Development of a MiL model can further be classified into three main modeling approaches, viz., kinematic modeling, quasi-static modeling, and dynamic modeling. The development of a virtual simulation model is a prerequisite for the development of an efficient control strategy for HEVs, which ultimately leads to optimized load-leveling among the power plants. This paper presents a brief review of the above-mentioned modeling approaches for HEVs. The research work describes a blend of forward and backward modeling approaches for a full parallel hybrid electric powertrain. Finally, the results of fuel consumption and energy management are discussed in detail.
Keywords: Hybrid Electric Vehicle; Modelling; Simulation
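The following is a minimal backward-facing (quasi-static) sketch of the kind of calculation such models perform: wheel power is computed from a prescribed speed trace and a fixed share is assigned to the electric machine. All vehicle parameters, the speed trace, and the 30% motor share are illustrative assumptions, not values from the paper.

```python
# A minimal backward-facing (quasi-static) fuel-use sketch with made-up numbers.
import numpy as np

dt = 1.0                                   # time step [s]
v = np.concatenate([np.linspace(0, 15, 60),
                    np.full(120, 15),
                    np.linspace(15, 0, 60)])          # speed trace [m/s]
a = np.gradient(v, dt)                     # acceleration [m/s^2]

m, Cd, A, Cr = 1500.0, 0.30, 2.2, 0.012    # mass, drag coeff., frontal area, rolling resistance
rho, g = 1.2, 9.81

F = m * a + 0.5 * rho * Cd * A * v**2 + Cr * m * g    # tractive force [N]
P_wheel = np.maximum(F * v, 0.0)           # traction power only, braking ignored [W]

motor_share = 0.3                          # assumed simple load-levelling split
P_engine = (1 - motor_share) * P_wheel
bsfc = 250 / 3.6e9                         # 250 g/kWh converted to kg/J
fuel_kg = np.sum(P_engine * bsfc * dt)
print(f"Fuel used over the cycle: {fuel_kg * 1000:.1f} g")
```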
This study investigates the influence of digital marketing on the purchasing behaviour of students aged 18-25 on Instagram. Utilizing a questionnaire approach with 81 respondents and percentage analysis using a spreadsheet, the research reveals a significant impact of Instagram marketing on student purchase decisions. Despite extensive usage of the platform, students are not primarily shopping on Instagram, possibly due to high product prices and a plethora of choices. The study suggests that businesses could incentivize purchases by offering discounts and flexible payment options. The research also uncovers that students frequently interact with brand story ads, particularly those with visually appealing content. Brands employing captivating graphics, trending music, and current trends in their ads have a higher likelihood of attracting students. Informal promotion strategies, such as the use of memes and trending news, have proven effective in capturing audience attention. The study further highlights the role of influencer product reviews in enhancing trust and authenticity, thereby influencing purchase decisions. Brands can capitalize on this by promoting their products through influencers and celebrities for greater reach and engagement. Lastly, the research underscores the critical role of customer feedback in the purchasing process. Brands are advised to regularly review customer feedback, address concerns, and provide clear instructions and appropriate compensation. This comprehensive understanding of student behaviour on Instagram can guide brands in devising more effective marketing strategies.
Blockchain technology is fundamentally a distributed database record, or a public ledger of all the transactions and proceedings that are executed digitally and shared with the other participating entities. Every transaction made in the public ledger is certified by mutual agreement of all the contributors in the arrangement, and once the information is entered it can never be erased. Each transaction made in the system can be easily verified and recorded.

I propose a secure protocol system for the nodes, or blocks, present in the blockchain network.

A number of nodes will be connected into a single network, which will be called a block. All data will be passed through this network, so there is a greater possibility of an intruder hacking the network, which may cause a major loss. By implementing the proposed system, it is possible to avoid this loss.

In a blockchain network, node communications are constantly at risk of being copied and of illegal operations being performed; if that happens, there is a probability of a massive catastrophe. The security of the block protocol therefore needs to be improved and implemented, and the proposed approach helps to cross-check the sender and receiver nodes, which keeps security up and controls variation in data handling.
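A hedged sketch of two mechanisms that such a protocol can build on, hash-chaining the block contents and authenticating sender and receiver node identities with a shared-key MAC, is shown below; the key, node names, and fields are made up for illustration and this is not the proposed protocol itself.

```python
# Hash-chained blocks plus a shared-key MAC over sender/receiver identities.
import hashlib, hmac, json, time

SHARED_KEY = b"illustrative-network-key"     # assumed pre-shared key

def make_block(prev_hash: str, sender: str, receiver: str, payload: str) -> dict:
    body = {"prev": prev_hash, "from": sender, "to": receiver,
            "data": payload, "ts": time.time()}
    serialized = json.dumps(body, sort_keys=True).encode()
    body["hash"] = hashlib.sha256(serialized).hexdigest()
    body["mac"] = hmac.new(SHARED_KEY, serialized, hashlib.sha256).hexdigest()
    return body

def verify_block(block: dict) -> bool:
    body = {k: block[k] for k in ("prev", "from", "to", "data", "ts")}
    serialized = json.dumps(body, sort_keys=True).encode()
    ok_hash = hashlib.sha256(serialized).hexdigest() == block["hash"]
    ok_mac = hmac.compare_digest(
        hmac.new(SHARED_KEY, serialized, hashlib.sha256).hexdigest(), block["mac"])
    return ok_hash and ok_mac

genesis = make_block("0" * 64, "node-A", "node-B", "hello")
print(verify_block(genesis))   # True; any tampering breaks the hash or the MAC
```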
Probabilistic graphical models are a common framework for modeling the joint probability distribution of random variables that uses parameterized graphical structures to represent probability distributions and the independence relationships between variables more compactly and comprehensively. In this regard, various representation methods (such as Bayesian networks, Markov networks, and template-based methods), approximate and exact inference methods, structure learning methods, and examples of the applications of these models to image processing, audio processing, text processing, and bioinformatics problems are presented. These topics are very important, and many problems can be solved by learning them. For this purpose, the book “Probabilistic Graphical Models” by Daphne Koller and Nir Friedman, which covers most of the mentioned topics, can be introduced to those interested.
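As a small worked illustration of the factorization idea behind Bayesian networks, the classic rain/sprinkler/wet-grass example (not taken from the text) is computed by enumeration below.

```python
# The joint factors as P(R) P(S|R) P(W|R,S); inference is done by enumeration.
P_R = {True: 0.2, False: 0.8}
P_S_given_R = {True: {True: 0.01, False: 0.99}, False: {True: 0.4, False: 0.6}}
P_W_given_RS = {(True, True): 0.99, (True, False): 0.8,
                (False, True): 0.9, (False, False): 0.0}

def joint(r, s, w):
    p_w = P_W_given_RS[(r, s)]
    return P_R[r] * P_S_given_R[r][s] * (p_w if w else 1 - p_w)

# P(Rain | WetGrass = True), summing out the sprinkler variable.
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r in (True, False) for s in (True, False))
print(f"P(Rain | WetGrass) = {num / den:.3f}")
```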
The commonality between interdisciplinary and transdisciplinary fields is the integration of knowledge and methods from different disciplines to solve complex practical problems. As a storage and dissemination center for knowledge resources, libraries have abundant multidisciplinary resources that can provide support for interdisciplinary and transdisciplinary development, and will also play an important role in the new knowledge environment. At the same time, libraries also face some challenges, such as the issues of disciplinary boundaries and information integration, the diversity and complexity of interdisciplinary and transdisciplinary research needs, and how library technology and resources can be better updated and adjusted to adapt to their own sustainable development. Libraries should address these challenges, achieve deep level changes in knowledge organization and discovery, build new knowledge service systems, and apply them to the construction of future learning centers.
Keywords: interdisciplinary; transdisciplinary; knowledge services; knowledge organization; knowledge discovery
Artificial intelligence (AI) technology is significant in modern daily life. It is so influential that many consider the technology the cornerstone of this era. Even in agriculture, there is a new concept known as Smart Farming. In a recent study, deep learning was adapted for predicting and detecting estrus in cows by adjusting the parameters of the deep learning model. A Convolutional Neural Network was employed, with the Artificial Immunity System algorithm used to optimize its hyperparameters. The results of this optimization showed an accuracy of 98.361%. YOLOv5 deep learning was also used to detect estrus in real time, with mAP50 = 0.995, mAP50-95 = 0.887, and an average F1-score of 0.993.
Keywords: dairy cows; deep learning; estrus prediction; image
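The sketch below illustrates, in a toy form, the clonal-selection style of search that immune-inspired optimizers use for hyperparameters; the fitness function is a stand-in, not the authors' CNN validation accuracy, and all ranges are assumptions.

```python
# Toy clonal-selection search over (filters, learning rate); fitness is a stand-in.
import random

def fitness(h):                 # stand-in for the validation accuracy of a CNN
    filters, lr = h
    return -(filters - 64) ** 2 / 4096 - (lr - 1e-3) ** 2 / 1e-6 * 0.1

def mutate(h, rate=0.2):
    filters, lr = h
    return (max(8, int(filters * random.uniform(1 - rate, 1 + rate))),
            max(1e-5, lr * random.uniform(1 - rate, 1 + rate)))

population = [(random.choice([16, 32, 64, 128]), 10 ** random.uniform(-4, -2))
              for _ in range(10)]
for _ in range(30):                                    # generations
    population.sort(key=fitness, reverse=True)
    clones = [mutate(h) for h in population[:3] for _ in range(3)]  # clone the best
    newcomer = [(random.choice([16, 32, 64, 128]), 10 ** random.uniform(-4, -2))]
    population = population[:3] + clones + newcomer

best = max(population, key=fitness)
print("best (filters, learning rate):", best)
```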
Due to the special geological and climatic conditions in Yunnan Province, natural disasters occur frequently. This study takes the prefabricated buildings in post-disaster reconstruction in Zhaotong, Yunnan Province as the research object, and uses the research methods of literature review, questionnaire survey, on-the-spot investigation and expert interview to explore the present situation and challenges of the operation and maintenance management of prefabricated buildings. It is found that prefabricated buildings are widely used in post-disaster reconstruction because of their rapid construction and cost-effectiveness, but at the same time, there are problems such as structural safety hazards, equipment maintenance difficulties and insufficient technical level of operation and maintenance personnel, which affect the long-term use safety of buildings and the quality of life of residents. In order to improve the safety and living comfort of prefabricated buildings, this paper puts forward some improvement measures, such as establishing regular inspection and maintenance system, introducing intelligent monitoring technology and perfecting technical training system, hoping to optimize operation and maintenance management, prolong the service life of buildings, improve the quality of life of residents and provide more effective technical support for post-disaster reconstruction areas.
Keywords: prefabricated buildings; post-disaster reconstruction; operation and maintenance management
Glaucoma is one of the leading causes of vision loss worldwide. Glaucoma cannot be cured in its advanced stages, so early detection of the disease has become an important factor in the medical field. Numerous studies have made clear that the retinal fundus image can be analysed using different image processing methods. In this study, many automated glaucoma detection techniques were thoroughly reviewed, and various papers were compared on the basis of the methodologies they adopted for detecting glaucoma from 2D fundus images using the cup-to-disc ratio (CDR). The majority of machine learning algorithms can accurately detect 85% of glaucoma cases. First, image segmentation techniques like the elliptical Hough transform and edge detection gave the region of interest, i.e. the optic disc and cup. These extracted images were then given to machine learning and deep learning models to detect the presence of glaucoma in the fundus image of the eye. The most significant deep learning, machine learning, and transfer learning methods for analyzing retinal images were reviewed, along with their benefits and drawbacks.
Keywords: Glaucoma detection; machine learning; deep learning; segmentation; neural network; Image processing; Optic Disc detection; Optic disc; Optic cup; Cup-to-disc ratio (CDR); Fundus image
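A minimal sketch of the cup-to-disc ratio (CDR) computation that such pipelines rely on is given below, using synthetic circular masks in place of real segmentation output; the 0.6 decision threshold is a commonly cited cut-off, not a value from the reviewed papers.

```python
# CDR from binary optic-disc and optic-cup masks (synthetic circles here).
import numpy as np

def disk_mask(shape, center, radius):
    yy, xx = np.ogrid[:shape[0], :shape[1]]
    return (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2

disc = disk_mask((256, 256), (128, 128), 60)    # stand-in disc segmentation
cup = disk_mask((256, 256), (128, 128), 40)     # stand-in cup segmentation

# Area-based CDR and vertical CDR (ratio of vertical extents).
cdr_area = cup.sum() / disc.sum()
cdr_vertical = cup.any(axis=1).sum() / disc.any(axis=1).sum()
print(f"area CDR = {cdr_area:.2f}, vertical CDR = {cdr_vertical:.2f}")
print("suspicious for glaucoma" if cdr_vertical > 0.6 else "within typical range")
```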
This research aims to prioritize the innovation capability and spatial variation of national high-tech zones in China based on the catastrophe progression method. The first step is to establish a feasible index system for assessing the innovation capability of high-tech zones; the second is to evaluate the innovation capability of 169 national high-tech zones in China using the Entropy Weight Method (EWM) and the Catastrophe Progression Method (CPM), and then use the weighted average method to convert the innovation capability evaluation results of the 169 high-tech zones into values for each province's high-tech zones in China. The last step utilizes visualization tools for spatial variation analysis.
The research constructed a comprehensive innovation capability evaluation system consisting of levels 1, 2, and 3, which have 4, 8, and 28 indicators, respectively. The evaluation results reveal that the top three provinces in terms of innovation capability and spatial variation of high-tech zones are 1) Beijing, 2) Shanghai, and 3) Guangdong, while the bottom three are 167) Hainan, 168) Qinghai, and 169) Ningxia. The prioritization and the visualization indicate that high-tech zones in eastern China (Beijing, Shanghai, and Guangdong) have significantly higher innovation capabilities than those in central and western regions due to richer resources, advanced infrastructure, and more substantial policy support. Central regions (Wuhan, Hefei) also show high capabilities thanks to recent investments and government support, while western areas generally lag, needing improved infrastructure, increased investment, and more substantial policy support.
Keywords: prioritize innovation capability and spatial variation; high-tech zones in China; Entropy Weight Method (EWM); Catastrophe Progression Method (CPM)
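The sketch below illustrates only the Entropy Weight Method step on a made-up 5-zone by 3-indicator matrix; the catastrophe-progression aggregation used in the paper is not reproduced, and the normalisation choice is an assumption.

```python
# Entropy Weight Method on an illustrative indicator matrix (rows: zones).
import numpy as np

X = np.array([[0.8, 120, 35],
              [0.6,  90, 20],
              [0.9, 200, 50],
              [0.4,  60, 10],
              [0.7, 150, 25]], dtype=float)

# 1. Min-max normalisation (all indicators assumed "larger is better"),
#    then column-wise proportions.
P = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + 1e-12)
P = P / P.sum(axis=0)

# 2. Entropy of each indicator (0 * log 0 treated as 0).
n = X.shape[0]
with np.errstate(divide="ignore", invalid="ignore"):
    e = -np.nansum(np.where(P > 0, P * np.log(P), 0.0), axis=0) / np.log(n)

# 3. Weights from the degree of divergence 1 - e.
w = (1 - e) / (1 - e).sum()
print("entropy weights:", np.round(w, 3))
```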
Following current tendencies, composite and sandwich structure research is anticipated to move towards more integrated, multifunctional, and sustainable solutions that answer industry needs. Multi-material and hybrid structures aim to integrate different materials, such as composites with metals, ceramics, and polymers, to provide unique features [7]. Embedding sensors, utilizing data analytics, and applying machine learning algorithms are proposed techniques for structural health monitoring (SHM) [8]. Developing advanced coatings, surface treatments, and materials designed to withstand UV exposure, moisture, and chemical agents will assist in increasing durability [9]. Exploring additive manufacturing (3D printing), automated layup methods (including automated fibre placement and tape laying), and novel curing techniques is expected to boost production efficiency and reduce costs [10]. From the numerical point of view, creating multi-scale models that accurately represent the complex behaviours of composites is expected to provide new design and predictive possibilities [11]. Incorporating functional materials in sandwich structures provides adaptive, self-healing, or sensing capabilities to composite structures [12]. Improvements in eco-friendly manufacturing processes, the development of biobased resins, and the creation of recyclable composite and sandwich structures, alongside improved methods for material recovery and reuse, are also expected soon [13].
Composite materials have become essential across various industries, including aerospace, automotive, construction, sports equipment, and electronics. These materials excel in strength-to-weight ratio, resistance to fatigue and corrosion, and low thermal expansion. The flexibility to customize composites by altering the type, size, and orientation of the reinforcement, along with the matrix material used, enhances their utility [1]. However, challenges such as high manufacturing costs, complexities in repair, vulnerability to delamination and other forms of damage, and difficulties in characterization and modelling present significant barriers [2]. However, it should be emphasized that numerical modelling has advanced significantly, particularly in simulating the complex behaviours of composite materials under various conditions. Nanocomposites aim to improve strength, stiffness, thermal and electrical conductivity, thereby expanding their application range [3]. Simultaneously, the push for sustainability has led to the development of eco-friendly composites that utilize renewable or recycled materials to minimize environmental impact. Bioinspired composites, inspired by natural materials like spider silk, seashells, and bone, aim to replicate their unique properties. Multifunctional composites, which incorporate materials such as shape memory alloys, piezoelectric elements, and carbon nanotubes, allow the creation of smart systems with capabilities for self-monitoring, adaptive responses, and energy harvesting [4]. Composite sandwich structures are widely used in industries like aerospace, automotive, marine, and construction due to their excellent strength-to-weight ratio, stiffness, and durability [5]. Innovations in materials, such as fibre-reinforced polymers and metal matrix composites for the face sheets, along with cutting-edge core materials like foams, honeycombs, and lattice designs, are being explored. To further enhance these structures, advanced core materials, including 3D-printed lattice configurations and bioinspired designs, are being developed to increase impact resistance, energy absorption, and overall structural integrity [6]. Understanding how sandwich structures behave under dynamic conditions such as impacts, blasts, and vibrations is essential.
The study aims to investigate the current adoption of precision agriculture technologies in Hong Kong and construct a model of adoption. It is the first comprehensive study of precision agriculture technology adoption utilizing both a grounded theory approach and quantitative methods. The study began with open-ended interviews with farmers in Hong Kong on their perceptions about the use of precision agriculture technologies on their farms. Using the grounded theory approach, the research team identified predictors of adoption. In the second phase, the research team will develop and administer a survey and test the adoption model.
Keywords: precision agriculture technology; harvest automation; information technology adoption; modern farming management
Getting stuck in a traffic jam by car and then not finding a parking space? That's a horror scenario for every car driver!
How easy would it be in a fairytale world? If there is no parking space at your destination, you would simply conjure the car away and then conjure it back when it is needed again. ‘Magic away and magic back’ is the basic idea of LiMo translated into reality. A parking space is no longer needed because the vehicle ‘dissolves’ at its destination. It disassembles into its cabin and chassis modules. The cabin disappears ‘as if from the ground’ by lifting off and docking itself to its own home. The chassis virtually disappears because it can be parked to save space.
LiMo is a completely new living and vehicle concept with the potential to revolutionise urban mobility. It combines the advantages of a private car (own passenger compartment) with the traffic-related benefits of car sharing - all this in a convenient (barrier-free), sustainable (parking spaces become green spaces!) and cost-effective (no underground car parks required) way. The focus is on a multifunctional cabin that is used around the clock, either as a vehicle cabin, as a lift cabin or as a living space extension (mini conservatory).
Keywords: car sharing; sustainable urban mobility; modular vehicle concept
Photovoltaic (PV) power generation is an essential form of renewable energy. A grid-connected PV inverter is the core equipment of a grid-connected PV power generation system. Based on the working principle of a high-power PV grid-connected inverter, the design of a 500 kW PV grid-connected inverter system is considered as an example. The equipment selection and parameter design methods of critical components, such as DC support capacitors, DC to AC modules, inductors, and capacitors, are introduced, and the overall system control strategy scheme and maximum power point tracking strategy are proposed. The results of MATLAB system simulation and field measurement experiments show that the control system can ensure that the output three-phase voltage and current are always in the same phase and frequency and that the output power is stable, fully meeting the grid connection requirements. In addition, the system has high conversion efficiency, good harmonic suppression, and a good MPPT tracking effect based on the particle swarm algorithm, which has high application and promotion value.
Keywords: Grid-connected inverter; Harmonic; System Design; MPPT; Conversion efficiency; Particle swarm algorithm
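A toy sketch of particle-swarm MPPT on a made-up power-voltage curve with two peaks (as under partial shading) is shown below; all PSO parameters and the P(V) formula are illustrative, and a real controller would sample power from the inverter rather than evaluate a formula.

```python
# Particle-swarm search for the maximum power point of a stand-in P(V) curve.
import numpy as np

def pv_power(v):                 # stand-in P(V) curve, global maximum near 310 V
    return (40000 * np.exp(-((v - 310) / 60) ** 2)
            + 22000 * np.exp(-((v - 150) / 40) ** 2))

rng = np.random.default_rng(0)
pos = rng.uniform(0, 500, size=12)          # candidate operating voltages [V]
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), pv_power(pos)
gbest = pbest[np.argmax(pbest_val)]

for _ in range(40):
    r1, r2 = rng.random(12), rng.random(12)
    vel = 0.6 * vel + 1.6 * r1 * (pbest - pos) + 1.6 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 500)
    val = pv_power(pos)
    better = val > pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]
    gbest = pbest[np.argmax(pbest_val)]

print(f"MPP voltage ~ {gbest:.0f} V, power ~ {pv_power(gbest) / 1000:.1f} kW")
```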
In this paper, the aim of industrial robots is multifaceted, encompassing several key goals for manufacturers. Increased productivity and efficiency: robots can work tirelessly without breaks, performing tasks much faster and more consistently than humans, which translates to higher production output and shorter lead times. Enhanced precision and quality: robots excel at repetitive tasks with pinpoint accuracy, minimizing errors and ensuring consistent product quality; this is crucial for industries like electronics and pharmaceuticals, where precision is paramount. Reduced costs: while the initial investment in robots can be significant, their long-term cost-effectiveness is undeniable; they reduce labor costs, minimize material waste, and require less maintenance than human workers. Improved safety: robots can safely handle hazardous materials and perform dangerous tasks, reducing the risk of injuries and fatalities on the shop floor. Greater flexibility and adaptability: modern robots are becoming increasingly versatile, capable of handling different tasks and adapting to changing production needs; this flexibility allows manufacturers to respond quickly to market demands and customize their products more easily. As a simple introduction to the idea of the project, it works as a mine and gas detector. Due to the impossibility of obtaining a mine sensor, only the metal sensor was added in the experiment. The robot was programmed using an Arduino and NRF modules on both the transmitting and receiving sides to control its movement wirelessly, so that it can work in forbidden areas between countries to detect mines. In addition, a harmful gas detection circuit is connected to sense whether a harmful gas is present and sound a siren, and a metal detection circuit is connected that also sounds a siren when metal is detected.
Keywords: Exploration Robotics; ESP Camera; Industrial Robot; Mine Detection Sensor; Military Application
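The authors implemented the control logic on an Arduino with NRF radio modules; the sketch below restates the same detect-and-alarm loop in Python for illustration only, with placeholder functions standing in for the real sensor reads and siren output.

```python
# Illustrative detect-and-alarm control loop (hardware access is mocked).
import time

def read_metal_sensor() -> bool:   # placeholder for the metal-detector input
    return False

def read_gas_sensor() -> bool:     # placeholder for the harmful-gas sensor
    return False

def sound_siren(reason: str):
    print(f"SIREN: {reason}")

while True:
    if read_metal_sensor():
        sound_siren("possible mine / metal object detected")
    if read_gas_sensor():
        sound_siren("harmful gas detected")
    time.sleep(0.1)                # poll the sensors ten times per second
```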
Phosphorene, a two-dimensional material, has garnered significant attention for its promising applications in optoelectronics due to its unique electronic properties. In this study, we employed Density Functional Theory (DFT) calculations, using the Quantum Espresso package, to investigate the electronic structure of phosphorene with an orthorhombic structure. The calculations utilized ultrasoft pseudopotentials and the Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional, with a k-point mesh of 20 × 20 × 1 and a vacuum of 10 Å along the z-axis to mitigate interlayer interactions. Our results revealed a direct band gap of Eg = 0.9 eV at the Gamma point, as confirmed by both the band structure and density of states (DOS) analyses. This direct band gap is particularly advantageous for optoelectronic applications such as light-emitting diodes (LEDs) and photodetectors, where efficient electron-hole recombination is crucial. The high density of states near the band edges suggests enhanced optical absorption and emission properties, making phosphorene a promising candidate for next-generation photodetectors and solar cells. Our findings provide a deeper understanding of the electronic properties of phosphorene, highlighting its potential for various optoelectronic applications.
Keywords: Phosphorene; DFT; Electronic properties
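As a small, hedged illustration of how a direct gap at the Gamma point is read off a band structure, the snippet below computes the VBM and CBM per k-point from synthetic eigenvalues (chosen to give a 0.9 eV gap at Gamma); these numbers are not the Quantum ESPRESSO output.

```python
# Band gap and direct/indirect character from synthetic band energies.
import numpy as np

# rows: k-points along a path (index 0 = Gamma); cols: band energies in eV
bands = np.array([
    [-1.10, -0.45, 0.45, 1.30],   # Gamma
    [-1.30, -0.70, 0.80, 1.60],
    [-1.50, -0.95, 1.10, 1.90],
])
n_occ = 2                         # assumed number of occupied bands

vbm_k = bands[:, :n_occ].max(axis=1)    # valence band maximum per k-point
cbm_k = bands[:, n_occ:].min(axis=1)    # conduction band minimum per k-point

gap = cbm_k.min() - vbm_k.max()
direct = np.argmin(cbm_k) == np.argmax(vbm_k)
print(f"band gap = {gap:.2f} eV, {'direct' if direct else 'indirect'}"
      f" (VBM/CBM at k-index {np.argmax(vbm_k)}/{np.argmin(cbm_k)})")
```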
As remote work has become the norm, businesses that embraced cloud solutions early on found themselves at a distinct advantage. With employees accessing company resources from their homes, businesses could ensure continuity despite unprecedented challenges. This shift not only underscored the necessity of cloud storage but also accelerated its adoption across various industries. The adaptability and resilience that cloud solutions provide have become essential attributes in a rapidly changing business environment.
Another appealing aspect of cloud storage for businesses is its scalability. Unlike physical storage, which requires substantial upfront investment and ongoing maintenance, cloud storage operates on a pay-as-you-go model. This allows companies to adjust their storage needs according to demand without incurring unnecessary costs. Startups and small businesses, in particular, benefit from this model, as it lowers barriers to entry and equips them with tools previously reserved for larger enterprises. This scalability levels the playing field, fostering innovation and competition.
Moreover, cloud providers offer a range of services, including automatic backups and data recovery measures, which are crucial in today’s data-driven world. These features enable businesses to focus on their core competencies. By outsourcing these functions to cloud providers, businesses can allocate resources more efficiently and innovate more effectively.
However, the widespread adoption of cloud storage is not without its challenges. Data security and privacy remain significant concerns for individuals and businesses alike. High-profile data breaches and cyber-attacks have raised questions about the safety of storing sensitive information in the cloud. While cloud providers invest heavily in security measures, the responsibility for data protection is shared between the provider and the user. This shared responsibility necessitates a comprehensive understanding of security protocols and a proactive approach to safeguarding data.
Furthermore, as data crosses international borders, varying regulations and compliance requirements come into play. Companies must navigate a complex web of data protection laws, such as the General Data Protection Regulation (GDPR) in Europe, which mandates stringent measures for data privacy. These challenges highlight the need for transparent policies and robust security frameworks to build trust in cloud solutions.
As we look to the future, cloud storage is full of possibilities. In today's data-driven world, cloud storage is not just a tool but a transformative force poised to shape the future of technology and society in ways we are only beginning to imagine. Navigating this era of rapid change requires collaboration among stakeholders, including businesses, governments, and individuals. By working together, we can harness the full potential of cloud storage while responsibly addressing its challenges, ensuring that this powerful technology continues to drive innovation and growth.
In the rapidly evolving landscape of digital technology, cloud storage has emerged as a cornerstone of innovation, efficiency, and accessibility. It has become an integral part of daily operations and long-term strategies, from individual users to multinational corporations. As we continue to witness this technological transformation, it is essential to understand the profound implications of cloud storage on our personal lives, businesses, and society at large.
The unprecedented accessibility that cloud storage offers is one of its most significant advantages. Unlike traditional storage solutions, cloud storage allows users to access data from anywhere in the world, provided they have an internet connection. This democratisation of information sharing enables seamless collaboration across different time zones and geographic locations, breaking down barriers that once hindered efficient communication and cooperation.
Water is a pure substance that carries a huge load during its natural cycle on the earth, in biosystems, in washing and cleaning, as well as in industrial and agricultural processes. Water gets rid of the load through evaporation powered by the sun, and leaves it in ponds, lakes and seas. In spite of modern wastewater treatment, these dumps of the water cycle are getting worse, and the energy and chemical consumption of treatment has increased steadily. Waters and ecosystems suffer from increasing amounts of drug and pharmaceutical residues, nutrients, various poisons, many other chemicals, microplastics, microbial growth and algae, and low oxygen.
The natural water cycle is the largest transportation and climate cooling system on the globe, powered by the sun. A lot of evaporative cooling has been lost to the expansion of civilization, open-area building and construction, and underground sewerage systems. Wastewater is pumped underground for thousands of kilometers to centralized wastewater plants and further to receiving waters. The natural water cycle, as well as photosynthesis, has shrunk significantly.
OxTube water clarification separates the load from the water in such a way that most of it can be removed and recycled. The clarification is hermetic and consists of four seamless phases: (1) separation of dissolved ingredients, (2) molecular activation, (3) clarification reactions, and (4) replacement dissolving of air or other gases. It separates dissolved gases like radon, carbon dioxide, hydrogen sulfide and hydrocarbons, and dissolved solids like iron, manganese, calcium, fluorine and phosphorus. The molecules are activated and clarification reactions happen immediately through suction of clean air, oxygen or ozone. The clarified water is aerated or oxygenated right after the clarification. All this happens within a second or a few seconds, depending on the water volumes to be clarified. Disinfection with 100 percent microbe reduction can be completed, and microbe growth eliminated, by ozone feed in the tube combined with the clarification. OxTube can be integrated into various water systems like fountains, flotation, hydro power generation, ships, boats and rivers.
Keywords: Wastewater Treatment; Water Clarification; Water Disinfection; Water Recycling; Particle Separation; Pharmaceutics Removal; Radon Removal
Cardiovascular diseases (CVD) are a leading global health concern, contributing significantly to mortality worldwide. With an estimated 17.9 million deaths annually, CVD includes conditions like coronary artery disease and cerebrovascular disease. The high incidence of sudden cardiac death (SCD) and myocardial infarction underscores the need for effective emergency interventions. Immediate medical response, including cardiopulmonary resuscitation (CPR) and the use of automated external defibrillators (AEDs), is crucial for improving survival rates. This study systematically evaluates various CPR strategies and technologies, focusing on advancements such as real-time feedback devices, community training programs, and the LUCAS automated chest compression device. By adhering to PRISMA guidelines and using rigorous methodology, the research identifies effective interventions and highlights gaps in current practices. Key findings suggest that community training, rapid response systems, hands-only CPR, and advanced technologies significantly enhance the efficacy of out-of-hospital cardiac arrest management. The study also introduces the DARCP device, which improves airway management and CPR quality. Overall, this research emphasizes the need for ongoing technological and methodological advancements to optimize emergency cardiac care and improve patient outcomes.
Keywords: Cardiovascular Diseases; Cardiopulmonary Resuscitation; Automated External Defibrillators; Sudden Cardiac Death (SCD); Real-Time Feedback Devices; Emergency Medical Response; Out-of-Hospital Cardiac Arrest
The presented research describes a humanoid robot that was developed and implemented by ourselves. The robot encompasses sensory capabilities similar to those found in humans, such as vision, touch, and hearing. Through comprehensive research, design, and implementation, the robot was able to accurately mimic human senses. The success of this project sheds light on the potential of humanoid robots in enhancing communication between humans and machines. The development of a robot with human-like sensory abilities opens up new horizons for applications in fields such as healthcare, assistance, and entertainment. Human beings have been forced to directly interact with infectious disease patients and undertake challenging tasks that consume time and effort. Therefore, we created a fully implemented humanoid robot equipped with human-like senses to assist humans in these tasks.

Our project involves a humanoid robot that we have developed entirely, which includes senses similar to humans. For example, it has the ability to recognize people by their faces and call them by their names, interacting with them accordingly. It can also recognize objects and emotions, enabling it to identify various things in real life and communicate with humans based on their feelings. It possesses the sense of hearing, allowing it to listen to people and speak to them in the Arabic language, without the need for internet connectivity. It can answer their questions or perform specific actions, such as fetching items for them. It has the ability to move its eyes, head, and all its joints, and it can face people, speak with them, and interact with its hands just like ordinary humans. Because of these capabilities, we can utilize it in the medical, educational, military, and industrial fields. For example, we can use it in the field of education as a teacher who delivers lectures, answers students' questions, and creates a complete lecture atmosphere.

We used a Raspberry Pi controller, which serves as a small computer in the robot's head, and programmed it using the Python language to operate with artificial intelligence similar to that found in humans. With the presence of large servo motors, it is capable of moving each joint just like humans. This research serves as a foundation for future advancements in the field of robotics, with the aim of creating robots capable of interacting and functioning alongside humans naturally and intuitively.
Keywords: Exploration Robotics; Humanoid Robot; Python language; Raspberry Pi Controller; Robotics
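A hedged sketch of the "recognise a person and greet them by name" behaviour is given below, using the open-source face_recognition package as one common Raspberry Pi approach; the image file names and the greeting step are assumptions, not the authors' code.

```python
# Match faces in one camera frame against known people and greet by name.
import face_recognition

known = {
    "Ahmad": face_recognition.face_encodings(
        face_recognition.load_image_file("ahmad.jpg"))[0],   # hypothetical photo
}

frame = face_recognition.load_image_file("camera_frame.jpg")  # one captured frame
for encoding in face_recognition.face_encodings(frame):
    matches = face_recognition.compare_faces(list(known.values()), encoding)
    for name, hit in zip(known.keys(), matches):
        if hit:
            print(f"Hello, {name}!")   # would be handed to the speech module
```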
This paper presents the current version of our (Elementary) Mathematical Data Model ((E)MDM), which is based on the naïve theory of sets, relations, and functions, as well as on the first-order predicate calculus with equality. Many real-life examples illustrate its 4 types of sets, 4 types of functions, and 76 types of constraints. This rich panoply of constraints is the main strength of this model, guaranteeing that any data value stored in a database is plausible, which is the highest possible level of syntactical data quality. An (E)MDM example scheme is presented and contrasted with some popular family tree software products.
Keywords: (Elementary) Mathematical Data Model; MatBase; Naïve theory of sets relations and functions; First order predicate calculus with equality; Database design; Modelware
An example of deeper integration of AI in software development tools is Google IDX. Being a cloud-based integrated development environment, it does not require a separate AI-based tool to improve developer productivity. Aside from offering similar benefits as GitHub Copilot and Supermaven, IDX also provides further AI-assisted task automation, such as writing documentation or agent templates, which provide the starting point for a project or process.
AI tools go beyond assisting engineers in parts of their workflows; they can also be used to create components for web applications. One such example is V0 by Vercel. It uses AI to generate React.js components that use shadcn/ui UI components and Tailwind CSS for styling. Keeping the AI "under the hood" lets it generate components better, as it creates them from a well-defined template.
Despite having many advantages, different tools powered by artificial intelligence also come with some drawbacks. The main drawback of general-purpose AI tools is that an engineer has to have some experience in the topic they are researching/prompting for in order to have a completely safe experience. Besides that, the cost is also a factor which must be considered, since many professional tools do have a subscription or token-based payment system, which can grow a lot if an engineer is using multiple tools at the same time.
Artificial intelligence today offers many ways for software engineers to make their work more productive than ever. Whether it be help inside their integrated development environments or outside of them, there are many pathways engineers can take to enhance their development experience. Despite the many options already available, it is clear that this is just the beginning of a larger trend towards greater inclusion of artificial intelligence in the daily workload of software engineers.
In a relatively short period of time, artificial intelligence has become one of the most powerful technologies and an irreplaceable companion to many software engineers. Nowadays, there are multitudes of ways in which artificial intelligence has embedded itself in the software engineering process, giving engineers an edge during different stages of the process.
Due to the wide array of available tools, developers are using both general-purpose and specialized AI tools to enhance the development experience. General-purpose tools, such as ChatGPT, can be used to find information, generate simpler code solutions, or research ideas. On the other side, there are many tools which are specialized for development, which offer more in-depth knowledge and deeper integration with development tools. There are numerous such tools, among which are GitHub Copilot, Supermaven, Google IDX, and Vercel V0.
GitHub Copilot and Supermaven are tools that enhance the development experience within the engineer's preferred development environment (usually Visual Studio Code, Neovim, or JetBrains IDE) by providing automatic code completion, chat interface for queries, research, and more. The main advantage that those tools offer is the benefit of the context of the codebase, which means that engineers have a personalized experience when using such tools. Another positive is that the aforementioned tools adapt the code style to the codebase, using a similar style (or language) of variable names, the way of writing code blocks, and writing appropriate tests.
If we look at the time domain, we cannot understand the situation, but if we look at the frequency domain, we can. In short, we utilize pattern-matching.
When we listen to the radio, we can distinguish men and women and can understand the emotion of the speaker. This demonstrates how the pattern works.
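The point can be made concrete with a short example: a signal that looks like noise sample-by-sample in the time domain shows clear structure, two frequency peaks, once transformed with the FFT.

```python
# A noisy two-tone signal: hard to read in time, obvious in frequency.
import numpy as np

fs = 1000                                        # sampling rate [Hz]
t = np.arange(0, 1, 1 / fs)
signal = np.sin(2 * np.pi * 120 * t) + 0.7 * np.sin(2 * np.pi * 250 * t)
signal += np.random.default_rng(0).normal(0, 1.0, t.size)   # heavy noise

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
top = freqs[np.argsort(spectrum)[-2:]]
print("dominant frequencies:", sorted(np.round(top)))       # ~120 and ~250 Hz
```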
The pattern approach is Deep Learning. But Deep Learning is static pattern-matching, and our world is dynamic. So we introduce the Recurrent Neural Network (RNN). But an RNN assigns weights between nodes in a random way, and it is done automatically; we cannot manage the system.

But if we introduce Reservoir Computing (RC), we can manage the output, so we can manage the system as we wish. What is more important in introducing RC is that it enables us to utilize micro technologies.

We can make sensors and actuators extremely small, so we can make them part of our body, and we can run them simultaneously. In short, it becomes a new INSTINCT. The introduction of RC enhances our human capability.
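A compact echo-state-network sketch, one common form of Reservoir Computing, is given below: the recurrent weights stay fixed and random and only the linear readout is trained, which is what makes the output manageable as argued above. All sizes and scalings are illustrative.

```python
# Echo state network: fixed random reservoir, ridge-regression readout.
import numpy as np

rng = np.random.default_rng(1)
N, steps = 200, 2000
u = np.sin(np.arange(steps) * 0.05)              # input signal
target = np.roll(u, -1)                          # task: predict the next sample

W_in = rng.uniform(-0.5, 0.5, (N, 1))
W = rng.uniform(-0.5, 0.5, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale spectral radius to 0.9

x = np.zeros(N)
states = np.zeros((steps, N))
for t in range(steps):
    x = np.tanh(W_in[:, 0] * u[t] + W @ x)       # reservoir update (never trained)
    states[t] = x

train = slice(200, 1500)                         # drop the initial transient
A = states[train]
Wout = np.linalg.solve(A.T @ A + 1e-6 * np.eye(N), A.T @ target[train])

pred = states[1500:1999] @ Wout
rmse = np.sqrt(np.mean((pred - target[1500:1999]) ** 2))
print(f"test RMSE: {rmse:.4f}")
```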
The octopus teaches us the importance of INSTINCT. Although its head is big, its brain is small; its brain capability is about that of a dog. Then why does it have such a large head? It is because it directly interacts with the outside world with its eight arms and recognizes the situation. Therefore, the octopus is known as an expert at escape. It can escape from any environment and situation; in fact, it can even escape from a screwed container.

Humans, on the other hand, collect body information, send it to the brain, and structure it into knowledge. So there is a time delay. Therefore, knowledge does not work in a world that changes every moment.

In short, octopus intelligence is Wisdom, while human intelligence is Knowledge. Today, what is needed is Wisdom. Let us make the most of INSTINCT!
The word "VUCA" is getting wide attention these days. Indeed, our world today is full of Volatility, Uncertainty, Complexity and Ambiguity. But come to think of it, our daily life is changing every moment and every day is different.

Then, how have we coped with this real world which changes every moment? With INSTINCT. As the real world is changing every moment and is unpredictable, the tool we have is nothing other than INSTINCT. Then, how does INSTINCT respond? It utilizes the Fourier Transform.
Fast-growing digital trends have driven growth in the threat landscape of cyber-attacks, pushing unprecedented burdens on organizations to manage vulnerabilities effectively. This study investigated two years of complex relationships between human expertise and technological solutions in the domain of cybersecurity vulnerability management (VM) for a leading fast-moving consumer goods (FMCG) company operating internationally in multiple countries, leveraging both on-premises and cloud infrastructure. This study introduces the tensions arising from this duality and an innovative AI-driven scoring methodology designed to streamline the end-to-end vulnerability management process, offering a more dynamic and contextualized risk assessment than current traditional scoring methods such as the Common Vulnerability Scoring System (CVSS) provide. Rooted in sociotechnical systems theory (STS), actor-network theory (ANT), and the resource-based view (RBV), this research bridges the gap between technological reliance and human interpretative skills, which are two dominant but often disconnected aspects of VM. This paper highlights the benefit to VM that results from a symbiotic relationship between humans and technology, emphasizing how artificial intelligence (AI) and automation can mitigate the limitations of human-centric approaches and how humans can address the contextual limitations of technology, resulting in a win-win approach. The findings set the orientation for a nascent stream of academic research on the relationship between humans and AI in vulnerability management and practical applications for scoring vulnerabilities in cybersecurity.
Keywords: Vulnerability management; Artificial intelligence; Automation; Human aspects of security; technology vs human expertise; Vulnerability scoring; CVSS
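A much-simplified sketch of what "contextual" scoring can look like is given below: a CVSS base score adjusted by asset criticality, exposure, and exploit availability. The fields and weights are illustrative assumptions, not the scoring methodology proposed in the paper.

```python
# Illustrative contextual adjustment of a CVSS base score.
from dataclasses import dataclass

@dataclass
class Finding:
    cvss_base: float          # 0.0 - 10.0
    asset_criticality: float  # 0.0 - 1.0, e.g. from a CMDB
    internet_exposed: bool
    exploit_available: bool

def contextual_score(f: Finding) -> float:
    score = f.cvss_base
    score *= 0.6 + 0.4 * f.asset_criticality     # de-prioritise low-value assets
    if f.internet_exposed:
        score *= 1.2
    if f.exploit_available:
        score *= 1.3
    return round(min(score, 10.0), 1)

findings = [
    Finding(9.8, 0.2, False, False),   # critical CVSS on an isolated low-value host
    Finding(7.5, 0.9, True, True),     # lower CVSS on an exposed crown-jewel asset
]
for f in sorted(findings, key=contextual_score, reverse=True):
    print(contextual_score(f), f)
```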
For queuing systems with moving servers, a control policy that introduces delays before the beginning of service is considered. A customer's average waiting time before service is taken as the efficiency index of the system. Although introducing delays before beginning service seems a paradoxical idea, it is shown that for some systems it yields a gain in a customer's average waiting time before service. The class of queuing systems for which it is advisable to introduce delays is described. The form of the optimal function minimizing the efficiency index is found.

It is shown that if the intervals between neighbouring services have an exponential distribution, then the gain in a customer's average waiting time before service equals 10% and is independent of the parameter of the exponential distribution. For the uniform distribution, the gain equals 3.5% and is also independent of the parameter of the uniform distribution. A criterion for determining for which systems the gain is greater is given. Some open problems and numerical examples demonstrating the theoretical results are given.
Keywords: queues with moving servers; a customer’s average waiting time; delay of beginning service; optimal function
This paper examines the United Arab Emirates' Science Technology and Innovation (STI) policy, particularly its impact and application within the educational sector, as the nation transitions from a resource-based to a technology-driven economy. Utilising a vertical methodological approach, the study begins by contextualising the STI policy before comparing it with those of other nations, and then deeply analysing its specific application and implications in education. The findings reveal that while the UAE has successfully integrated STI into its educational framework, thereby progressing towards Vision 2021, several challenges persist. These include the need for a more structured implementation process, an undefined role for teachers in this transformative journey, and the neglect of demand factors within the educational system. Despite these gaps, the policy has aligned the UAE with global competitiveness standards and fostered an innovative environment. The paper concludes that although the STI policy has significantly transformed the educational landscape, driving innovation and technological advancement, it requires further refinement. To achieve deeper and more sustainable impacts, the policy must incorporate a comprehensive implementation framework, enhance teacher capabilities, and address the demand factors of education. By refining these elements, the UAE can better tailor its educational system to meet the objectives of the STI policy and prepare for future challenges, thereby reinforcing its position in the global economic landscape.
Keywords: STI policy; education; technology; innovation; policy evaluation
Numerical modeling of ECC, specifically using the finite element method (FEM), is critical to predict the material’s behavior under various loading conditions such as tension, compression, and shear. The FEM is widely used to simulate these behaviors, allowing researchers to model the material’s microstructural properties and failure criteria under different environmental conditions and loading scenarios [9]. In FEM models, constitutive laws for materials are critical, and for bendable concrete, the use of damage plasticity models is common [10]. These models simulate how the material transitions from elastic to plastic behavior and eventually reaches failure due to crack initiation and propagation. Studies have shown that by adjusting material parameters like fiber content, researchers can simulate how ECC manages crack width and spacing, providing a more resilient structure compared to conventional concrete [11]. In this context, FEM models have been used successfully to match experimental data, verifying the effectiveness of the material’s ductility and energy absorption capabilities [12]. Furthermore, various failure criteria, including maximum principal stress and strain-based models, are implemented to predict when and where the material will fail [13], and damage mechanics procedures as well [14]. Numerical models also include cohesive zone models (CZM) to simulate the bond behavior between the fibers and the cementitious matrix, which is crucial for understanding the interface’s strength and durability in reinforced ECC [15]. These numerical methods provide engineers with reliable tools to design and optimize the use of bendable concrete in practical applications. By refining FEM simulations, researchers can predict material performance more accurately, ensuring the design of safer, more durable structures that can withstand dynamic and extreme loads such as seismic events and explosions.
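As a minimal illustration of one ingredient mentioned above, the snippet below evaluates a bilinear cohesive traction-separation law of the kind used in cohesive-zone models for the fibre-matrix interface; the strength and critical openings are illustrative, not calibrated ECC values.

```python
# Bilinear traction-separation law for monotonic crack opening.
import numpy as np

t_max, d0, df = 3.0e6, 0.01e-3, 0.15e-3   # peak traction [Pa], onset & failure opening [m]

def traction(delta):
    """Traction as a function of crack opening (monotonic separation only)."""
    delta = np.asarray(delta, dtype=float)
    rising = np.where(delta <= d0, t_max * delta / d0, 0.0)
    softening = np.where((delta > d0) & (delta < df),
                         t_max * (df - delta) / (df - d0), 0.0)
    return rising + softening              # zero beyond complete failure

for d in np.linspace(0, 0.2e-3, 5):
    print(f"opening {d * 1e3:.3f} mm -> traction {traction(d) / 1e6:.2f} MPa")

# Fracture energy is the area under the curve: 0.5 * t_max * df.
print(f"Gc = {0.5 * t_max * df:.1f} J/m^2")
```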
Bendable concrete, also known as Engineered Cementitious Composites (ECC), represents a significant advancement in the construction industry due to its enhanced ductility and flexibility compared to traditional concrete. Unlike conventional concrete, which is brittle and prone to cracking under tension, bendable concrete can withstand significant deformation before failure. This is largely due to the inclusion of micro-scale fibers, typically polymer, steel, or other materials, which help control crack propagation and improve tensile strength [1]. One of the primary innovations of ECC is its ability to form tight, distributed micro-cracks rather than large, localized fractures. These micro-cracks allow the material to bend without losing structural integrity, making it ideal for applications that demand high resilience, such as buildings in earthquake-prone areas, infrastructure exposed to heavy traffic, or structures that need to absorb the energy from explosions or impacts [2]. Researchers have conducted extensive studies to assess ECC’s performance, especially in environments that experience dynamic loading, such as bridges and high-rise buildings [3]. Experimental studies showed that the fiber-matrix interface is critical in ensuring bendability. Research has focused on optimizing fiber volume fraction and matrix composition for different environmental conditions, including hot and dry climate [4]. Experimental investigations have tested ECC under dynamic loading conditions like shocks and earthquakes. The key advantage is its ability to strain harden under tension, which helps to resist crack formation and maintain structural integrity during seismic events or explosions. This is particularly important for improving the safety of infrastructure in earthquake-prone or high-risk areas [5]. Experimental tests have also been conducted to evaluate ECC’s energy absorption and fracture toughness, making it more effective than traditional concrete in withstanding sudden impacts or explosive forces [6]. Studies have examined ECC’s performance in extreme climates, including high temperatures and dry conditions, to ensure long-term durability. Experiments have shown that adjusting the composition (e.g., using supplementary cementitious materials like fly ash) can help mitigate issues like shrinkage or reduced workability [7]. Furthermore, durability testing under freeze-thaw cycles and high-temperature exposure reveals that ECC has superior long-term performance, making it an ideal candidate for infrastructures in extreme environments [8].
Delay is a major Quality of Service (QoS) metric in mission-critical applications. Some applications run on Mobile Ad-Hoc Network (MANET) setups, which come with transmission challenges arising from the size of traffic packets and from environmental conditions. These challenges cause transmission delays and packet loss, and hence degraded network performance. This study investigated the performance of the Earliest Deadline First (EDF), Low Latency Queueing (LLQ) and Weighted Round Robin (WRR) scheduling algorithms in MANETs.

Firstly, the study investigated the Abhaya pre-emptive EDF scheduler. The study improved and adapted the EDF algorithm to the MANET environment, and formulated the Enhanced Earliest Deadline First-I and II (EEDF-I & EEDF-II) algorithms respectively. The numerical results showed that the EEDF-II model shortened the waiting times of packets of the different queues at various system loads compared with the EEDF-I model.

Secondly, the study adapted and improved the existing LLQ algorithm model in the M/G/1 queue system. The numerical results revealed that the proposed algorithm outperformed the adapted one in transmitting video packets. The study further extended the proposed LLQ algorithm to formulate the Extended Low Latency Queuing algorithm (ELLQ). The numerical results revealed that video packets experienced the least conditional mean response time/slowdown, followed by voice packets and lastly text packets.

Thirdly, the study enhanced and studied the existing WRR (EWRR) service strategy, and then proposed an Improved WRR (IWRR) model in the M/G/1 queue system under varying workload distributions. The numerical results showed that video packets performed poorly compared to voice packets under the EWRR algorithm.

In conclusion, we studied three algorithms, namely EDF, LLQ and WRR, and proposed three novel variants, i.e., EEDF-II, ELLQ and IWRR, for MANETs.
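A minimal earliest-deadline-first dispatch sketch is shown below: packets sit in a priority queue keyed by absolute deadline and the tightest deadline is always served first. The traffic classes and deadlines are illustrative and this is not the EEDF-I/II model itself.

```python
# EDF dispatch using a heap keyed by absolute deadline.
import heapq

queue = []                                   # (deadline, seq, packet) heap
seq = 0

def enqueue(packet_id, traffic_class, arrival, relative_deadline):
    global seq
    heapq.heappush(queue, (arrival + relative_deadline, seq, (packet_id, traffic_class)))
    seq += 1                                 # tie-breaker keeps FIFO order on equal deadlines

enqueue("p1", "text",  arrival=0.0, relative_deadline=0.500)
enqueue("p2", "video", arrival=0.1, relative_deadline=0.040)
enqueue("p3", "voice", arrival=0.2, relative_deadline=0.100)

while queue:
    deadline, _, (pid, cls) = heapq.heappop(queue)
    print(f"transmit {pid} ({cls}), absolute deadline {deadline:.3f}s")
# Dispatch order: p2 (video), p3 (voice), p1 (text)
```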
Quantum error-correcting codes are important and essential for quantum information and quantum computation both in the binary and non-binary cases. In the last few years, a lot of research has been done for finding good quantum codes. As a class of quantum error-correcting codes, the quantum stabilizer codes play an important role in coding theory. As we know, the construction of new quantum stabilizer codes which have good parameters is a difficult problem. Different methods have been proposed by researchers to construct a quantum stabilizer code. A new method of quantum stabilizer code construction is based on symmetric association schemes. By employing this method, quantum stabilizer codes with optimal parameters are obtained. For this purpose, the book “Quantum Computation and Quantum Information” by M. A. Nielsen and I. L. Chuang, which covers most of these topics, can be introduced to those interested.
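As a small worked illustration of the basic stabilizer check, the snippet below verifies that a set of Pauli strings pairwise commute, using the well-known five-qubit code generators; codes constructed from association schemes would be validated the same way.

```python
# Two Pauli strings commute iff they anticommute on an even number of qubits.
def commute(p: str, q: str) -> bool:
    anti = sum(1 for a, b in zip(p, q) if a != "I" and b != "I" and a != b)
    return anti % 2 == 0

generators = ["XZZXI", "IXZZX", "XIXZZ", "ZXIXZ"]   # five-qubit code stabilizers
ok = all(commute(g, h) for i, g in enumerate(generators)
         for h in generators[i + 1:])
print("generators pairwise commute:", ok)           # True -> a valid stabilizer group
```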
This paper rigorously and concisely defines, in the context of our (Elementary) Mathematical Data Model ((E)MDM), the mathematical concepts of dyadic relation, reflexivity, irreflexivity, symmetry, asymmetry, transitivity, intransitivity, Euclideanity, inEuclideanity, equivalence, acyclicity, connectivity, the properties that relate them, and the corresponding corollaries on the coherence and minimality of sets made of such dyadic relation properties viewed as database constraints. Its main contribution is the pseudocode algorithm used by MatBase, our intelligent database management system prototype based on both (E)MDM, the relational, and the entity-relationship data models, for enforcing dyadic relation constraint sets. We proved that this algorithm guarantees the satisfiability, coherence, and minimality of such sets, while being very fast, solid, complete, and minimal. In the sequel, we also presented the relevant MatBase user interface as well as the tables of its metacatalog used by this algorithm.
Keywords: dyadic relation properties; satisfiability, coherence, and minimality of constraint sets; (Elementary) Mathematical Data Model; MatBase; db and db software application design
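A hedged sketch (not MatBase's pseudocode algorithm) of how such dyadic-relation properties can be checked on a finite relation is given below; in a DBMS, the corresponding constraints would reject updates that violate the declared properties. The example relation is made up.

```python
# Property checks for a dyadic relation R over a finite set S.
def reflexive(S, R):    return all((x, x) in R for x in S)
def irreflexive(S, R):  return all((x, x) not in R for x in S)
def symmetric(R):       return all((y, x) in R for (x, y) in R)
def asymmetric(R):      return all((y, x) not in R for (x, y) in R)
def transitive(R):      return all((x, w) in R for (x, y) in R for (z, w) in R if y == z)

def acyclic(S, R):
    # depth-first search for a directed cycle
    WHITE, GREY, BLACK = 0, 1, 2
    color = {x: WHITE for x in S}
    def visit(x):
        color[x] = GREY
        for (a, b) in R:
            if a == x:
                if color[b] == GREY or (color[b] == WHITE and not visit(b)):
                    return False
        color[x] = BLACK
        return True
    return all(visit(x) for x in S if color[x] == WHITE)

S = {"a", "b", "c"}
ReportsTo = {("a", "b"), ("b", "c")}            # illustrative "reports to" relation
print(acyclic(S, ReportsTo), transitive(ReportsTo), asymmetric(ReportsTo))
```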
Engineering as the Foundation
Engineering is fundamentally about problem-solving and applying scientific principles to create solutions that meet human needs. As noted by the National Academy of Engineering, "Engineering is essential to our health, happiness, and safety as a matter of daily life" [1].
This statement underscores the critical role that engineering plays in developing technologies that improve our quality of life. The discipline is the backbone for technological advancement, from civil engineering projects that create infrastructure to software engineering that develops applications.
The Role of Technology
Technology encompasses the tools and systems developed through engineering to enhance human capabilities. The rapid advancement of technology has transformed communication, transportation, healthcare, and education. For instance, smartphones have revolutionized how we communicate and access information. According to a report by the World Economic Forum, "The Fourth Industrial Revolution is characterized by a range of new technologies blurring the lines between the physical, digital, and biological worlds" [2].
This convergence is evident in various applications, such as smart homes with IoT devices that monitor energy usage and enhance security.
The Emergence of AI Tools
Artificial intelligence represents a significant leap in technological capability. AI tools utilize machine learning algorithms to analyze data, recognize patterns, and make decisions with minimal human intervention. A report by McKinsey Global Institute states that "AI could potentially deliver additional global economic activity of around $13 trillion by 2030". This potential underscores AI's transformative impact on industries such as healthcare, where AI algorithms can assist in diagnosing diseases more accurately than traditional methods.
In manufacturing, AI-driven automation enhances productivity by optimizing supply chains and reducing downtime through predictive maintenance. For example, General Electric employs AI tools to predict equipment failures before they occur, saving millions in operational costs. Such applications highlight how AI tools can streamline processes and improve efficiency.
Challenges and Ethical Considerations
Several challenges arise despite the benefits of integrating engineering, technology, and AI tools. Issues such as data privacy, algorithmic bias, and job displacement due to automation require careful consideration. As highlighted by a report from the Brookings Institution, "AI systems can perpetuate existing biases if they are trained on biased data". This raises ethical questions about fairness and accountability in AI applications.
Moreover, the rapid pace of technological change can lead to societal disruptions. Workers in industries vulnerable to automation may face job loss without adequate retraining opportunities. Addressing these challenges will require collaboration between policymakers, educators, and industry leaders to ensure that technological advancements benefit society.
Conclusion
Significant changes are being driven by the confluence of engineering, technology, and AI tools; their integration is reshaping the landscape of various industries and everyday life. This essay has explored how these fields intersect to drive innovation, enhance efficiency, and address complex challenges across different sectors. These developments bring unprecedented opportunities for creativity and effectiveness, but also difficulties that require careful consideration. By navigating these challenges with an emphasis on ethical issues and societal consequences, we can make full use of this transformative combination to build a better future.
References
The Halting Problem, first posited by Alan Turing in 1936, presents a fundamental question in computer science: can there exist a universal algorithm capable of determining whether any given program, when provided with a specific input, will eventually halt or continue to run indefinitely? Turing's groundbreaking proof demonstrated the inherent undecidability of this problem, meaning no single algorithm can resolve the halting question for all possible program-input pairs. This undecidability has profound implications for the limits of computational theory and the boundaries of algorithmic problem-solving. However, the practical necessity of ensuring program termination remains critical across various domains, particularly in developing reliable and secure software systems. In this paper, we propose an innovative and comprehensive framework that synergizes formal methods, symbolic execution, and machine learning to provide a practical approach to analyzing and predicting the halting behavior of programs. Our methodology begins with formal methods, specifically abstract interpretation, to approximate the program's behavior in a mathematically rigorous manner. By mapping concrete program states to an abstract domain, we create an over-approximation of program behavior that facilitates the detection of potential non-termination conditions. This step is crucial in handling the complexity of real-world programs, allowing us to strike a balance between computational feasibility and the precision of analysis. Next, we incorporate symbolic execution, a dynamic analysis technique that uses symbolic values in place of actual inputs to explore multiple execution paths of a program. Symbolic execution generates path conditions, logical constraints representing each possible execution path. These conditions are then solved using advanced Satisfiability Modulo Theories (SMT) solvers to determine their feasibility. By systematically exploring feasible paths, symbolic execution uncovers scenarios that might lead to infinite loops or non-termination, providing a dynamic perspective that complements the static analysis of abstract interpretation. To enhance our analysis further, we integrate machine learning models trained on a diverse dataset of programs with known termination behavior. These models extract features such as loop counts, recursion depths, and branching factors from the program code and use them to predict the likelihood of termination. Machine learning offers a data-driven approach, leveraging patterns and statistical correlations to provide probabilistic predictions about program behavior. This component of our framework adds an additional layer of analysis, using the power of modern computational techniques to guide and refine our predictions. Our integrated approach also includes innovative techniques such as counterexample-guided abstraction refinement (CEGAR) to iteratively improve the accuracy of our abstract models based on counterexamples provided by symbolic execution. Additionally, we employ feature importance analysis to interpret the contributions of different features in our machine learning models, enhancing the transparency and trustworthiness of our predictions. This paper presents a detailed evaluation of our framework through extensive experiments on a variety of programs, demonstrating its effectiveness and scalability. 
We highlight how our approach can detect non-termination scenarios in complex real-world applications, thereby contributing to the reliability and safety of software systems. Furthermore, we explore the implications of our findings for future research, emphasizing the potential for hybrid analysis techniques and the integration of explainable AI in program analysis. Our work advances the field of program analysis by offering a robust, scalable, and scientifically sound methodology for addressing the practical challenges posed by the Halting Problem. By combining the strengths of formal methods, symbolic execution, and machine learning, we provide a comprehensive solution that not only enhances the accuracy of termination predictions but also sets the stage for future innovations in software verification and automated debugging.
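As a minimal illustration of the symbolic-execution step described above, the following Python sketch uses the z3-solver package to decide whether one hand-written path condition is feasible. The program fragment, variable names, and constraints are hypothetical; the paper's framework is, of course, far more elaborate.

    # Illustrative only: feasibility check of one symbolic path condition with an
    # SMT solver (z3-solver package); the constraints below are hypothetical.
    from z3 import Int, Solver, unsat

    x, y = Int("x"), Int("y")
    s = Solver()
    # Path condition collected along one execution path:
    #   assume(x > 0); y = 2 * x; branch taken only if y < x
    s.add(x > 0, y == 2 * x, y < x)
    print(s.check() == unsat)   # True: this path is infeasible and can be pruned

    # Features such as loop counts, recursion depths, and branching factors would
    # then be extracted from the program and fed to a trained classifier to obtain
    # a probabilistic termination prediction, as outlined in the abstract.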
Keywords: Halting Problem; Program Analysis; Formal Methods; Abstract Interpretation; Symbolic Execution; Machine Learning; SMT Solvers; Software Verification; Program Termination; Automated Debugging; Counterexample-Guided Abstraction Refinement; Explainable AI; Feature Importance Analysis; Computational Theory
References
Micro-credentials have emerged as a flexible, personalised approach to skills development, serving a variety of learner and industry needs. These credentials offer opportunities for learners to upskill or reskill in a more focused and accessible manner, while enabling employers to address specific skills gaps efficiently. Despite their rising popularity and potential to transform education and workforce development, significant questions surrounding their quality assurance persist. Issues related to the standardisation, transparency, and transferability of micro-credentials pose challenges for both learners and employers seeking to validate and recognise them across different contexts. This paper examines the critical gaps in the quality assurance of micro-credentials, focusing on key areas such as standardisation, recognition, assessment rigour, and alignment with industry standards. It explores the complexity of integrating micro-credentials into existing educational ecosystems and the need for consistent practices that ensure credibility and comparability. Through an analysis of existing literature, this study highlights the pressing need for robust frameworks and alignment mechanisms that guarantee the quality and value of micro-credentials. Furthermore, it underscores the importance of collaboration between educational institutions, industry partners, and policymakers in building a sustainable infrastructure that ensures the integrity and portability of micro-credentials within the broader educational and employment landscape.
The arrival of 5G technology has sparked immense excitement across various industries due to its promise of ultra-fast speeds, low latency, and the ability to connect billions of devices seamlessly. However, with this technological leap come significant security concerns that need careful consideration and mitigation. The unique architecture of 5G networks, combined with the proliferation of connected devices, opens new avenues for cyber threats, espionage, and privacy breaches. This article will explore the major security concerns surrounding 5G technology in detail.
Artificial Intelligence (AI) and Machine Learning (ML) have transcended the realm of science fiction and have found their way into various facets of our daily lives, revolutionizing industries and creating new paradigms.
Healthcare
In healthcare, AI and ML are used to predict diseases, personalize treatment plans, and even discover new drugs. Algorithms can analyze vast datasets of medical records to identify patterns that human doctors might miss, enabling earlier diagnosis and more effective treatments.
Finance
In the finance sector, AI-driven algorithms are employed for fraud detection, credit scoring, and algorithmic trading. These systems can analyze transactional data in real-time, identifying fraudulent activities far more efficiently than traditional methods.
Transportation
Autonomous vehicles are one of the most exciting applications of AI and ML. Companies like Tesla and Waymo use AI to develop self-driving cars that learn from vast amounts of data to navigate roads and avoid obstacles. Additionally, AI optimizes routes for logistics companies, reducing fuel consumption and improving delivery times.
Customer Service
AI-powered chatbots and virtual assistants have become the first line of customer service for many companies. These systems can handle a large volume of inquiries, providing instant responses and freeing up human agents to tackle more complex issues.
Entertainment
AI and ML are also transforming the entertainment industry. Streaming services like Netflix and Spotify use algorithms to analyze user preferences and recommend movies, shows, and music tailored to individual tastes. Additionally, AI is used in the creation of video games, generating realistic environments and challenging gameplay.
Education
In education, AI and ML provide personalized learning experiences. Intelligent tutoring systems can adapt to the learning pace and style of individual students, offering customized resources and feedback. Moreover, AI can assist in grading and evaluating student performance, allowing teachers to focus more on interactive teaching.
Agriculture
AI and ML are paving the way for smart farming techniques. Drones and sensors collect data on crop health, soil conditions, and weather patterns. This data is then analyzed to optimize planting schedules, irrigation, and pesticide use, leading to higher crop yields and sustainable farming practices.
Retail
In retail, AI enhances the shopping experience through personalized recommendations, inventory management, and dynamic pricing strategies. For instance, Amazon’s recommendation engine uses AI to suggest products based on browsing history and purchase behavior.
AI and ML continue to push the boundaries of what is possible, driving innovation across various sectors and improving efficiencies, accuracy, and outcomes. As these technologies evolve, they promise to bring even more transformative changes to our world.
References
The traditional moment distribution method is modified to analyze statically indeterminate beams and frames that consist of prismatic and tapered members. Expressions for the stiffness and carry-over factors of tapered members are derived from the solution of a second-order differential equation for the curvature of a rectangular cross-section tapered in depth only.
The moment distribution method is based on distributing the applied moment at any joint to the ends of the members connected to that joint. The distributed moment is obtained mathematically by multiplying the applied moment by a distribution factor, and it is then carried over to the far end of the member using a carry-over factor of 0.5. In the modified method there are two different carry-over factors, one less than 0.5 and the other larger, depending on the tapering ratio. Three applications to a portal frame with different support conditions and different column shapes are presented, together with numerical analyses and results.
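As a rough numerical illustration of the distribution and carry-over step described above, the Python sketch below performs one distribution cycle at a single joint. The stiffness values and the tapered member's carry-over factor are placeholders, not the expressions derived in the paper, and the direction-dependence of the tapered member's two carry-over factors is ignored here for brevity.

    # One moment-distribution cycle at a single joint (illustrative numbers only).

    def distribute(applied_moment, stiffness):
        total = sum(stiffness.values())
        return {m: applied_moment * k / total for m, k in stiffness.items()}   # DF_i = k_i / sum(k)

    stiffness = {"prismatic beam": 4.0, "tapered column": 6.5}     # relative stiffness terms (placeholders)
    carry_over = {"prismatic beam": 0.5, "tapered column": 0.62}   # tapered member: COF differs from 0.5

    distributed = distribute(100.0, stiffness)                     # 100 kN*m applied at the joint
    carried = {m: carry_over[m] * M for m, M in distributed.items()}
    print(distributed)   # moment resisted at the near end of each member
    print(carried)       # moment carried over to each member's far end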
Keywords: Distribution moment; Stiffness factor; Carry over factor; Prismatic member; Tapered member
References
Recent advancements in research techniques have led to the integration of more comprehensive information from mixed methods of persona development. This study used a user-centered design approach to investigate the emotional support provided to children aged 3–6 during medical visits, which involved quantitative and qualitative methods, including questionnaires and in-depth interviews with the primary caregivers of children in this age group. The questionnaire was structured around four main aspects: (1) basic information; (2) reasons for the children's medical visits and parents' anxiety levels; (3) methods and cognitions in caring for the children's illness; and (4) approaches and cognitions regarding caring for the children's psychological and emotional states. One hundred fourteen valid questionnaire responses were collected, and in-depth interviews were conducted with eight pairs of 3-6-year-old children and their primary caregivers, to understand (1) common illnesses of the child, (2) the methods of handling these illnesses, (3) the medical consultation process and common issues encountered, and (4) the child's personal preferences. The results show that over 75% of the participants are raising more than two children. The common reasons for children's medical visits include preventive healthcare, respiratory diseases, vaccinations, and fever. When children fall ill, parents often experience anxiety, tension, and emotional agitation. When seeking medical assistance, the top two priorities for most parents are alleviating their children's symptoms and understanding the causes of illness. Although more than 60% of parents believe recording their children's conditions and symptoms is essential, less than 30% habitually record their children's health conditions. For medical care, the majority of parents prefer pediatric clinics. Three personas representing typical parent-child interactions during children's illnesses were developed: (1) vaccinations, (2) fever with febrile convulsions, and (3) injuries from falls. The critical aspects of emotional care for children include providing reassurance and guidance, acknowledging their emotions, using comforting tools to distract them when necessary, and delivering regular health education.
Keywords: persona; emotional care; preschool children; medical visit; mixed-methods
References
The menace of breast cancer poses a formidable challenge to global public health, particularly affecting women across diverse regions. Timely identification and precise prognosis are imperative for efficacious treatment and enhanced patient outcomes. Conventional diagnostic methods, such as mammography and biopsy, though widely employed, can be invasive and occasionally yield imprecise results. Within this context, machine learning (ML) algorithms have emerged as a promising avenue for breast cancer prediction. These algorithms demonstrate proficiency in scrutinizing extensive datasets, discerning intricate patterns, and subsequently formulating predictions based on the analyzed information. The research presented in this paper is dedicated to the formulation of a sophisticated predictive model for breast cancer utilizing ML algorithms. The dataset utilized encompasses comprehensive clinical and imaging data from patients diagnosed with breast cancer. Subsequent to the extraction of pertinent features from the dataset, rigorous preprocessing procedures will precede the training and testing phases of the ML models. The primary objective of this study is to identify the most accurate algorithm for predicting breast cancer. A comprehensive evaluation of various ML algorithms, including logistic regression, decision trees, random forests, and neural networks, will be undertaken to assess their efficacy in breast cancer prediction. Logistic regression, a statistical method adept at analyzing datasets with one or more independent variables and a binary outcome variable, will be employed in discerning crucial factors such as age, family history, and prior cancer diagnoses in predicting breast cancer. Decision trees, an alternative ML algorithm for classification tasks, leverage a hierarchical structure to classify data based on a sequence of decisions derived from input features. Random forests, an extension of decision trees, employ multiple trees to enhance model accuracy, each trained on a random subset of the dataset. Neural networks, inspired by the intricate architecture of the human brain, comprise interconnected layers of nodes processing input data to generate predictions. The learning mechanism involves adjusting the weights of inter-node connections based on training data. The evaluation of ML algorithm performance will be based on standard metrics including accuracy, precision, recall, and F1-score. These metrics serve as robust indicators of the model’s effectiveness in accurately predicting breast cancer. The identification of pivotal features contributing to breast cancer prediction within this study is anticipated to yield insights into the potential applications of ML algorithms in this domain, contributing significantly to the development of precise prediction models for breast cancer. In summary, this research endeavor, focusing on the prediction of breast cancer using ML algorithms, holds promise for enhancing both diagnosis and treatment of this debilitating condition. The creation of precise prediction models employing clinical and imaging data can empower healthcare providers to identify individuals at elevated risk promptly and initiate appropriate interventions. The outcomes of this study may play a pivotal role in advancing more effective breast cancer screening programs and ultimately improving patient outcomes.
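To make the evaluation pipeline concrete, here is a minimal scikit-learn baseline. It uses the library's built-in Wisconsin breast cancer dataset rather than the clinical and imaging data described above, so it only illustrates the train/test and metric workflow, not the study's results.

    # Illustrative baseline on scikit-learn's built-in Wisconsin dataset, not the
    # clinical/imaging dataset described in the abstract.
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import LogisticRegression
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

    X, y = load_breast_cancer(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=42)

    models = {
        "logistic regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
        "random forest": RandomForestClassifier(n_estimators=200, random_state=42),
    }
    for name, model in models.items():
        y_pred = model.fit(X_tr, y_tr).predict(X_te)
        print(name,
              "accuracy", round(accuracy_score(y_te, y_pred), 3),
              "precision", round(precision_score(y_te, y_pred), 3),
              "recall", round(recall_score(y_te, y_pred), 3),
              "F1", round(f1_score(y_te, y_pred), 3))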
Keywords: Breast cancer; Machine learning; Predictive model; Clinical data; Diagnosis
References
The primary purpose of E-GAS monitoring is to ensure the functional stability of the electronic controller in relation to vehicle torque. At level 2 of the E-GAS monitoring concept, the calculation of permissible torque typically relies on formula-based models that simplify real vehicle behavior, accompanied by complex calibration to enhance accuracy. However, this approach often fails to adequately account for the diverse driving scenarios encountered by the vehicle. To address this limitation, this study proposes an algorithm for calculating permissible torque using machine learning at level 2 of the E-GAS monitoring concept. The effectiveness of the algorithm is validated through the analysis of real-world vehicle driving data, confirming its practicality and applicability.
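As a hedged sketch of the idea only (not the authors' algorithm), the Python fragment below lets a small regression model stand in for the level-2 permissible-torque calculation and uses it in a simple plausibility check. The signal names, training values, and margin are all hypothetical.

    # Hypothetical level-2-style torque plausibility check; a learned regressor
    # replaces the formula-based permissible-torque model. Signals and numbers
    # are illustrative only.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    # Training signals: [accelerator pedal %, vehicle speed km/h, battery power kW]
    X_train = np.array([[10, 20, 15], [40, 60, 55], [80, 100, 110], [100, 120, 150]], dtype=float)
    permissible_torque = np.array([60.0, 180.0, 320.0, 400.0])    # N*m, illustrative labels

    model = GradientBoostingRegressor(random_state=0).fit(X_train, permissible_torque)

    def torque_plausible(signals, requested_torque, margin=1.1):
        # Requested torque must stay under the learned permissible limit plus a margin.
        limit = float(model.predict(np.asarray(signals, dtype=float).reshape(1, -1))[0])
        return requested_torque <= margin * limit

    print(torque_plausible([40, 60, 55], 150.0))    # within the learned permissible band
    print(torque_plausible([10, 20, 15], 300.0))    # flagged as implausible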
Keywords: E-GAS Monitoring Concept; Torque Monitoring; Machine Learning; Electric Vehicle; Functional Safety; Vehicle Control Unit (VCU)