Books published by MOHAMMED ABDUL SATTAR

  • by Renita J
    486,-

    Cryptography has become an integral part of the globe. The need to secure things has become a necessity as the world leaps towards technology enhancements. Modern security systems use cryptography for secure transactions and communications, to secure personal information and other confidential data, and to create trust between different servers. Weak cryptography may expose the infrastructure to vulnerabilities, which may cause information leakage and brand destruction. Hence, the latest developments in technology should also focus sternly on how cryptography is employed and managed throughout these innovations. With these novelties, it is safe to transmit sensitive information, since it becomes unreadable and unmodifiable. The plaintext is in a readable format and is encrypted to obtain the cipher text; the cipher text is the encrypted data in a non-readable format, and it is decrypted to recover the plaintext in its readable format. The major aspects to be included in a crypto module include the algorithms, the keys, the libraries and the certificates that are being used. Cryptographic keys are used to protect sensitive information. The length of the keys should be maintained as suggested by NIST (the National Institute of Standards and Technology), and private keys must be kept secret to be effective. The use of insecure keys or disclosure of the secret keys makes the crypto algorithm obsolete. Crypto algorithms have a basic mathematical foundation to maintain the confidentiality, integrity and authenticity of sensitive information. It is important to choose reliable, standardized and mathematically secure crypto algorithms to prevent data exposure, data tampering, or repudiation. In the present time, cryptography has become a mandatory foundation for digital business. The organizations and technologies that provide crypto security should follow the techniques suggested by standards groups such as NIST and ISO (the International Organization for Standardization). This leads to crypto agility, which is the key to keeping pace with the latest cryptographic compliance requirements, standards, and recommendations that sustain and secure digital business.
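    The plaintext-to-ciphertext round trip described above can be shown with a minimal, hedged sketch. It is not taken from the book: it assumes the third-party Python `cryptography` package and its Fernet construction (symmetric, authenticated encryption), simply to illustrate encryption, decryption, and why the key must stay secret.

      # Minimal sketch; assumes `pip install cryptography` (not part of the book).
      from cryptography.fernet import Fernet

      key = Fernet.generate_key()        # secret key (URL-safe base64 of 32 random bytes)
      cipher = Fernet(key)

      plaintext = b"Sensitive account details"   # readable format
      ciphertext = cipher.encrypt(plaintext)     # non-readable format; useless without the key
      recovered = cipher.decrypt(ciphertext)     # readable again

      assert recovered == plaintext

    Anyone holding the key can decrypt; anyone without it sees only the unreadable token, which is exactly the property the blurb describes.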

  • by Jane Olive Sharon P
    456,-

    Discrete Mathematics is the analysis of mathematical structures that are fundamentally discrete. It is used extensively in mathematics and computer science. Graph theory deals with mathematical models referred to as graphs. It has broad applications in computer science, linguistics, the social sciences, statistical mechanics, genetics, cheminformatics, bio-informatics and so on. Graph theory has emerged as a vital and effective tool for engineers and scientists, for example, in the area of designing and analyzing algorithms for various problems that range from designing the itineraries for a shipping company to sequencing the human genome in life sciences. Graphs are mathematical structures which consist of a set V of vertices and a set E of edges joining certain pairs of vertices. They are used to model pair-wise relations between objects from a certain collection. Vertices, also called nodes, are represented as points in the plane, and edges are represented as the line segments connecting them. If more than one edge joins a pair of vertices, or an edge has its origin and end at the same vertex, then such a graph is called a pseudograph. Graphs are networks of points connected by lines. Graph theory had its beginnings in recreational puzzles and games, but it has grown into a significant area of mathematical research, with applications in chemistry, operations research, social sciences, and computer science. The history of graph theory may be traced back specifically to 1735, when the Swiss mathematician Leonhard Euler solved the Königsberg bridge problem. The Königsberg bridge problem was an old puzzle concerning the possibility of finding a path over every one of the seven bridges that span a forked river flowing past an island, without crossing any bridge twice. Graph theory is applied to problems ranging from simple daily-life questions to very complex ones. To make this possible, computer science plays an enormous role, and it depends on the theories and proofs of mathematics for developing software applications.
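    The Königsberg puzzle mentioned above can be restated as a small computation. The following hedged sketch (not from the book) models the seven bridges as a multigraph and applies Euler's degree condition for a walk that crosses every bridge exactly once; the vertex labels A-D are assumed names for the four land masses.

      # Hedged sketch: the Königsberg bridges as a multigraph, checked against
      # Euler's condition for a walk that uses every edge exactly once.
      from collections import Counter

      # Vertices are the four land masses; each pair below is one of the seven bridges.
      bridges = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
                 ("A", "D"), ("B", "D"), ("C", "D")]

      degree = Counter()
      for u, v in bridges:
          degree[u] += 1
          degree[v] += 1

      odd_vertices = [v for v, d in degree.items() if d % 2 == 1]
      # An Eulerian path exists in a connected multigraph only if 0 or 2 vertices have odd degree.
      print(dict(degree))                 # {'A': 5, 'B': 3, 'C': 3, 'D': 3} -> all odd
      print(len(odd_vertices) in (0, 2))  # False: no such walk exists, as Euler showed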

  • by Suja Golden Shiny S
    480,-

    While the Internet of Things (IoT) is already found everywhere around us, from smart homes and healthcare to wearables and smart parking, it is expected that IoT will change everyone's lifestyle drastically soon. Industry 4.0 and Society 5.0 are now possible through the combination of the Industrial IoT, the internet of systems and cyber-physical systems. The immense growth of Industry 4.0 and Society 5.0 is driving the need for innovation in wireless sensor networks (WSNs). However, since their debut, WSNs have always been conceived to be application specific. Software Defined Networking (SDN) is an architecture with flexibility, dynamics and low management cost; it is a potential computing and networking framework that enables global network control by separating the network intelligence from the data layers.

  • by Geetha A
    470,-

    The human eye is a sensing organ for vision. It is well-constructed to gather information about the surrounding environment. The retina is the light-sensitive tissue located in the back of the eye. It accumulates light focused by the lens, turns it into neural impulses, and transfers these signals to the optic nerve. The optic nerve transports the signal to the brain and aids visual processing. The fundus camera supports capturing the eye's interior surface, including the retina, vasculature, Optic Nerve Head (ONH), and posterior pole. It aids in interpreting retinal characteristics and is used to screen patients with sight-threatening retinal disorders. According to the World Health Organization (WHO), 43 million people are blind, 295 million have moderate to severe visual impairment, and 258 million have mild vision impairment. Many studies attest that Glaucoma is a prominent cause of blindness, accounting for around 3.12 million cases worldwide, and a leading cause of visual disability. The number of Glaucoma sufferers is predicted to rise to roughly 111.8 million in 2040 and 117 million in 2050 due to the constant increase in population. Glaucoma is a degenerative eye disease that affects vision peripherally and gradually leads to blindness due to increased Intra Ocular Pressure (IOP) and changes in retinal structures such as the Optic Disc (OD) and Optic Cup (OC). Early diagnosis and quick treatment of Glaucoma are critical for preserving visual function and preventing permanent vision loss. Glaucoma comprises a group of ocular neuropathies characterized by gradual loss of retinal ganglion cells, optic nerve degeneration, and vision loss. Additionally, Glaucoma patients experience a severe reduction in their visual field, which is asymptomatic until the disease progresses to an advanced stage. Clinical trials were conducted to assess the disease's progression stages, ranging from mild to advanced, using numerous risk factors and anatomic features. Diagnosis of the disease necessitates a thorough examination, several tests, and access to various diagnostic modalities. This complex, subjective detection method is hampered by various challenges, including a lack of patient awareness of the disease, a lack of healthcare facilities, and a paucity of trained professionals, particularly in remote locations.

  • by Abhishek Kr Singh
    456,-

    Mathematics can be distinguished among all the sciences due to its precise language and clear rules of reasoning. The notion of proof lies at the heart of mathematics. Typically, a proof of any mathematical statement is a logical argument that can convince anyone of the correctness of the statement. Starting with a set of assumptions, a mathematical proof tries to discover new facts using a sequence of logical steps. These logical steps must correspond to the rules of reasoning which are considered correct in mathematics. Ideally, a proof should contain all the necessary information so that its verification becomes a purely mechanical job. However, the contemporary practice of writing mathematical proofs is only an approximation to this ideal, where the task of a reviewer is to use his intelligence to judge whether the proof could be expressed in a way that conforms to the valid rules of reasoning. A reviewer very often comes across inferential gaps, imprecise definitions, and unstated background assumptions. In such circumstances, it is difficult to say whether a proof is correct or not. Even if the statement turns out to be true, judging it to be so could take a long time. Mathematical proofs are becoming more and more complex, and the length of unusually large proofs has also increased with time. If a proof is short, one can check it manually. But if a proof is deep and already fills hundreds of journal pages, very few people may have the expertise to go through it. The correctness concern for such complex proofs is further heightened by the fact that some of these proofs rely on extensive computation.
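    The ideal described above, a proof whose verification is a purely mechanical job, is precisely what proof assistants offer. As a hedged illustration (not drawn from the book), here is a tiny Lean 4 snippet whose statements are checked entirely by the proof checker, with no reviewer judgement involved:

      -- The checker verifies this mechanically from the library lemma Nat.add_comm.
      theorem add_comm_example (a b : Nat) : a + b = b + a :=
        Nat.add_comm a b

      -- A purely computational fact, verified by reduction alone.
      example : 2 + 2 = 4 := rfl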

  • by Pankajini Samal
    456,-

    Rice is the most important cereal not only in India but also in most Asian and African countries, and worldwide it is second in importance only to wheat. In India the average productivity of rice is approximately 3.4 t/ha, which is much below its highest productivity. The major constraints on rice production are different stresses, which include both biotic and abiotic stresses. Among the biotic factors, diseases caused by fungi, bacteria and viruses are the major ones. Rice suffers from different fungal diseases such as blast, sheath blight, brown spot, sheath rot, false smut, seedling blight and bakanae. Sheath blight (ShB) of rice has emerged as a serious threat to rice cultivation in the Indian subcontinent. The introduction of semi-dwarf, high-yielding rice varieties brought a revolution in rice production in India, but it has also invited several problems; some formerly minor diseases have emerged as major ones. Sheath blight, caused by Rhizoctonia solani (Kuhn) (perfect stage Thanatephorus cucumeris), was once a minor disease but has turned into a serious threat to rice cultivation, mainly in tropical countries. The estimated yield loss due to this disease is 5.2 to 50 percent, depending upon the environmental conditions. The disease was first identified in Japan and subsequently spread to various parts of the world. The major problem with the disease is that the causal organism is polyphagous in nature and forms new anastomosis groups, leading to new virulent strains. There are many reasons behind the difficulties in controlling this disease, including the wide host range of the pathogen, its capability to survive for long periods as thick-walled sclerotia in the soil, and its high genetic variability. The pathogen can be effectively controlled only by fungicides, which have a negative impact on the environment.

  • by Brij Bhushan
    456,-

    Since its appearance on earth, man has exploited and modified the environment to his advantage in many ways. One of the factors that drives the degradation of the environment is population growth. Due to the exponential growth of population in recent years, there is great demand for construction and thus increasing pressure on natural resources, causing their acute shortage. Cement is one of the main materials used by the construction industry in large quantities. During the manufacturing process of cement, significant emission of carbon dioxide (CO2) is generated: production of one ton of cement emits approximately one ton of CO2. Similarly, there is huge environmental degradation due to the excessive use of topsoil in brick manufacturing. Traditionally, soil, stone aggregates, sand, bitumen, cement etc. are used for constructional activities. Concrete is a blend of cement, sand, coarse aggregate and water. The key factor that adds value to concrete is the ability to design it to withstand the harshest environments. Due to excessive extraction and consumption of natural materials, their quantities in nature are declining very fast. Moreover, the cost of extracting good quality natural material is increasing day by day. In view of the increasing demand for these natural materials, scientists are searching for alternative construction materials, and industrial waste materials are one such category. If such waste materials can be suitably utilized in constructional activities, the problems associated with their unscientific disposal and the consequent generation of pollution can be partly reduced. It has now become a global concern to find a social, techno-economic, environment-friendly solution to preserve a cleaner and pollution-free environment. In recent years, the challenge for civil and environmental engineers has been to use solid waste as environment-friendly supplementary cementitious material through economical methods causing the least possible environmental degradation. Some of the successfully tested and used industrial wastes are crumb rubber, blast furnace slag, and fly ash. Use of such industrial waste materials not only leads to potential savings in natural resources and energy and a reduction in CO2 emissions, but also diverts material that would otherwise have gone to landfill and required a waste management program. Environmental conservation is an undeniable industrial responsibility, and stringent environmental laws and market competitiveness have also demanded effective and substantial actions from industry to preserve the environment.

  • by Madhura Chakraborty
    410,-

    Myogenesis is the formation of adult skeletal muscle from muscle precursor cells. Muscle tissues make up the largest body tissues in an organism. Skeletal muscle is essential for locomotion and movement and is voluntarily controlled by the organism, unlike cardiac and smooth muscle. It has a highly organized cytoskeletal system. One of the distinct characteristics of skeletal muscle is the striation contributed by repeating units of actin and myosin, which together form a single myofiber. This repeating unit of actin and myosin, known as the sarcomere, establishes the skeletal muscle unit. If we investigate the developmental origin of skeletal muscle, it starts from the paraxial mesoderm. The paraxial mesoderm is a tissue formed at the blastopore or primitive streak during the process of gastrulation. During embryonic axis elongation, the paraxial mesoderm at the posterior tip of the embryo forms the presomitic mesoderm. The anterior part of this transient presomitic mesoderm structure is committed to forming somites. It is from the somites that myogenesis starts. Initiated with skeletal myoblast cells, myogenesis involves proliferation and differentiation, which ultimately lead to the formation of multinucleated myofibers by the fusion of mononucleated myoblasts and myocytes. A vast number of transcription factors control the process of myogenesis. These factors are known as myogenic regulatory factors (MRFs) and are members of the basic helix-loop-helix family of transcription factors. Activation of these MRFs is the hallmark of the initiation of myogenesis. Here we discuss different factors that contribute to the various stages of myogenesis, starting from the embryonic stage to tissue formation. In general, muscle development has two major phases: one is the embryonic developmental phase, in which skeletal muscle tissue develops, and the other is muscle regeneration, which is more prominent in adult muscle tissue after injury. Both development and regeneration need regulation by MRFs. The analysis of cell fusion and the mechanisms behind it began long ago in different systems. Starting from the embryonic stages, cell fusion has been found to be a normal physiological phenomenon in neural cells, yeast reproduction, exocytosis, macrophages, and muscle cells. Here, we address the fusion of mouse myoblasts, in which two or more myoblast cells fuse to form multinucleated myotubes. Apart from the vertebrate system, the mechanism of myoblast fusion has also been established in Drosophila and zebrafish systems. Despite differences in gene expression, the fundamental mechanism of muscle fusion remains the same across systems.

  • by K. Riyazuddin
    440,-

    Revolutionary changes have been witnessed in the last decade in the field of wireless communications. 2G GSM-based communication systems, supporting data rates of around 10 Kbps, are still present but are heading towards extinction. A number of enhanced wireless technologies have been developed in the past decade to allow broadband wireless access with data rates greater than 100 Mbps. These enhancements have successively led to the improvement and/or development of 3G and 4G wireless technologies like Worldwide Interoperability for Microwave Access (WiMAX), High Speed Downlink Packet Access (HSDPA), and Long-Term Evolution (LTE). This became possible through wireless technologies such as Orthogonal Frequency Division Multiplexing (OFDM), Code Division Multiple Access (CDMA), etc. These techniques form the basis of understanding in the realm of 3G/4G wireless communication systems. Wireless communications is the fastest growing segment of the information communications industry. As such, it has captured the public imagination and the attention of the media. Cellular systems have grown tremendously in the last few years. Certainly, the old wireline systems are being supplanted by cellular phones, which have become an essential business tool and a part of daily life in most developed countries. The future of wireless systems seems very bright because of their explosive growth, coupled with the proliferation of laptop and palmtop computers used both as stand-alone systems and as part of a much larger networking infrastructure. However, many technical challenges must be met in designing robust wireless networks that support the intended applications. OFDM has gained importance in recent years, and it is the technique selected for wireless systems such as Long-Term Evolution (LTE) for 4G communications systems, Wireless Local Area Networks (WLAN) and WiMAX. For this reason, OFDM systems have been under study in order to develop more accurate mobile station positioning, in both outdoor and indoor environments. Nevertheless, OFDM systems require high timing synchronization accuracy in order to receive the signal correctly, which makes timing synchronization estimation a key issue in OFDM receivers. Propagation is especially complicated over wireless channels, where multipath propagation, high levels of interfering signals or obstruction of the Line Of Sight (LOS) path make timing estimation even more difficult in indoor environments. The problem of multipath fading makes timing estimation difficult in outdoor environments as well.
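    As a hedged illustration of the OFDM principle described above (not taken from the book; the subcarrier count, cyclic prefix length and QPSK mapping are assumptions), the following numpy sketch builds one OFDM symbol and recovers the data under an ideal channel with perfect timing, which is exactly the condition that timing synchronization must establish:

      # Minimal OFDM symbol sketch (assumed parameters, ideal channel).
      import numpy as np

      n_sub = 64        # number of subcarriers (assumed, WLAN-like)
      cp_len = 16       # cyclic prefix length (assumed)

      rng = np.random.default_rng(0)
      bits = rng.integers(0, 2, 2 * n_sub)

      # QPSK mapping: each pair of bits becomes one complex constellation point.
      symbols = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)

      # The IFFT turns frequency-domain subcarrier values into a time-domain symbol.
      time_signal = np.fft.ifft(symbols)

      # The cyclic prefix guards against multipath; correct timing must locate its end.
      ofdm_symbol = np.concatenate([time_signal[-cp_len:], time_signal])

      # Receiver with perfect timing: strip the prefix and FFT back to the subcarriers.
      recovered = np.fft.fft(ofdm_symbol[cp_len:])
      assert np.allclose(recovered, symbols)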

  • by Surinder Kumar
    480,-

    Concrete, the world's most commonly used construction material, is a heterogeneous structure made up of a variety of easily available basic building materials such as cement, coarse & fine aggregate, water, and, depending on the application, admixtures or other additives. When these ingredients are combined, a gel mass is formed that can be conveniently moulded into any shape. When properly cured, the cement forms a matrix that holds the other materials together to create the solid, stone-like substance known as concrete. Because of its resilience, rigidity, longevity, mouldability, performance, and economy, concrete is a flexible, durable, and long-lasting material used extensively in the building industry. For centuries, humans have used concrete in their ground-breaking architectural feats. Cement is one of the most important glueing ingredients used by the building industry, mostly for concrete in large quantities, and its manufacture results in substantial carbon dioxide emissions; one tonne of cement generates approximately one tonne of carbon dioxide. There are two basic types of cement manufacturing processes, as well as a variety of kiln types. Depending on the water content of the raw material feedstock, these are referred to as "wet" or "dry". The wet process requires more energy to evaporate the 30-percent-plus slurry water before heating the raw materials to the necessary calcination temperature. Due to shifts in habits, preferences, and society, the need for a sustainable alternative to traditional bricks is steadily increasing, not just in metro cities but also in rural and urban areas. However, incorporating these additional materials into the building industry is extremely difficult. The technical properties of materials containing such manufacturing wastes would necessitate a significant amount of analysis. The aim is to use waste from the toothpaste industry as a substitute cementitious material in concrete, partially replacing cement and sand, and as an alternative to virgin/mineral materials/natural resources in bricks, i.e. replacing clay.

  • by Debashree Dalai
    490,-

    Among the different economically important plants, cereals have contributed immensely to the progress of human civilization. Rice (Oryza sativa L.) is one such cereal crop species, which belongs to the order Poales of the Poaceae (Gramineae) family and is grown in diverse altitudes and ecosystems around the world. The ever-increasing human population, along with the reduction of cultivable land, is posing challenges to food security which are further compounded by changes in climate. To address those issues, besides policy interventions, the solution also partly lies in searching for and utilizing the genes or alleles which may have been left out during domestication, followed by the loss of variability in modern plant breeding, especially in the post-green-revolution era. The wild Oryza genotypes are considered an important repository of those kinds of untapped genes or alleles. More wild rice genome sequences are expected to become available soon. Although new-generation markers like SNPs are now available, there is still a long way to go for their widespread utilization in the case of wild rice. In contrast, genome-wide STMS marker resources, when available, can be readily used by all laboratories for pre-breeding with wild rice. The current research aims at utilizing the whole genome sequences of different wild rice species to develop a conserved set of common genome-wide, cross-transferable STMS markers for different species of Oryza, so that with minimum resources researchers can carry out targeted breeding activities involving a large number of species, directly or indirectly. The marker resources can also be used for the characterization of germplasm and natural populations of wild rice, besides the development of populations with genome-wide distributed or targeted introgression in the background of cultivated rice, which will include both Asian and African rice. These populations can support large-scale mining and utilization of genes or alleles of wild species lost or untapped during domestication.
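    STMS markers are designed around microsatellites, short motifs repeated in tandem, so the first computational step in such work is typically to scan genome sequence for these repeats. The sketch below is a hedged toy illustration, not the book's pipeline: a regular-expression scan of a made-up DNA string for di- and tri-nucleotide repeat runs.

      # Toy microsatellite (SSR) scan; real STMS design would also extract flanking
      # sequence for primer design, which is omitted here.
      import re

      def find_ssrs(seq, min_repeats=5):
          """Return (motif, repeat_count, start) for di-/tri-nucleotide repeat runs."""
          hits = []
          pattern = r"([ACGT]{2,3})\1{%d,}" % (min_repeats - 1)
          for m in re.finditer(pattern, seq):
              motif = m.group(1)
              hits.append((motif, len(m.group(0)) // len(motif), m.start()))
          return hits

      toy_contig = "TTAC" + "GA" * 8 + "CCGTA" + "ATG" * 6 + "TTC"
      print(find_ssrs(toy_contig))   # [('GA', 8, 4), ('ATG', 6, 25)]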

  • by Choudavaram Nagaraju
    466,-

    Wireless communications are expected to provide users with wireless multimedia services like high-speed internet access, wireless television and mobile computing. These services demand high data rates, higher carrier frequencies and mobility, so communication technology that enables reliable transmission over mobile radio channels is in growing demand. An OFDM system will be an effective means of communication if the orthogonality of the carrier signals is maintained properly. Modulation is the process of mapping the data onto the carrier signal's amplitude or phase. Transmitting a baseband signal over a long distance is practically impossible due to loss of signal strength. MIMO systems can improve capacity using only a small number of transmit or receive antennas when compared to single-input single-output systems in flat fading channels. Fourth-generation MIMO-OFDM systems offer higher capacity, coverage and reliability over multipath fading channels.

  • by Harish Gujjar
    456,-

    Computers, with their inherent ability to store large amounts of information, have found vast applications in the fields of banking, e-commerce, agricultural science, decision support systems, expert systems, image processing, etc. In science and engineering there is vast scope for applications of digital image processing in the years to come. There is every possibility of eventually developing a machine like a human being, in which image processing will play an important role. Remote sensing satellites and spacecraft, storage of images for business purposes in banks, processing of medical images, man-made machines, acoustics, sonar and radar are some of the applications in which Digital Image Processing (DIP) is anchored. Predicting the environment, searching for resources inside the earth, geographical mapping, crop prediction, population growth studies, weather forecasting and so on are carried out using images. Images taken in deep space, such as by the Mars Orbiter Mission (MOM), television broadcasting, audio and video conferencing, document duplication in office automation, security systems in banks and wireless military communication are further applications of imaging. There are thus large areas of application in which image processing plays a vital role. In medical applications, chest X-rays, cineangiograms, transaxial tomography projections and other images obtained from ultrasonic scanning, nuclear medicine, radiology and magnetic resonance are processed. Screening and monitoring of diseases in patients, as well as detection of trauma, are carried out on such images. Various defense systems recognize their targets using radar and sonar. There is a long list of further applications, such as robot vision in the automation industry, cartoon creation, and fashion design in the textile industry. The term Digital Image Processing (DIP) refers to the processing of a 2D image on a digital computer; a digital image is a large but finite number of bits arranged in a particular order. The algorithmic and mathematical tools employed in digital image manipulation are collectively called digital image processing. The process of identifying bulk grain by type is called classification, and samples in which the presence or absence of a foreign body is detected are classified as impure or pure. Classification of bulk grain and detection of foreign bodies in it are the major challenges. This is done by building an appropriate neural network prototype for the classification and recognition of grain images by type and, with respect to foreign bodies, as pure or impure, using three alternative feature sets - color, texture, and a combination of color and texture - and by detecting the foreign body and the type of impure sample. A digital camera is used to capture image samples of different grains, and image processing techniques are used to extract features such as color and texture. A classifier based on a back-propagation neural network (BPNN) is used for training. The purity or impurity of the grains is then tested using the network thus developed. Segmentation and thresholding techniques are applied to the impure images to identify the foreign body and its category.
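    As a hedged sketch of the kind of pipeline described above (synthetic numbers only; the feature names and network size are assumptions, and scikit-learn's MLPClassifier stands in for a back-propagation neural network), color/texture feature vectors are fed to a small neural network classifier:

      # Illustrative only: fake color/texture features -> back-propagation-trained MLP.
      import numpy as np
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(1)

      def fake_features(n, mean_hue):
          """Stand-in for real color/texture features extracted from grain images."""
          hue = rng.normal(mean_hue, 0.05, (n, 1))      # e.g. mean hue of the sample
          contrast = rng.normal(0.5, 0.1, (n, 1))       # e.g. a texture contrast measure
          return np.hstack([hue, contrast])

      X = np.vstack([fake_features(50, 0.30),   # grain type A
                     fake_features(50, 0.60)])  # grain type B
      y = np.array([0] * 50 + [1] * 50)

      clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
      clf.fit(X, y)
      print(clf.predict(fake_features(3, 0.30)))   # expected: mostly class 0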

  • by F. Leena Vinmalar
    456,-

    Data mining targets the discovery of knowledge from the underlying data. When a tremendous amount of data is stored in files, databases and other repositories, it is increasingly important to develop powerful means for the analysis and interpretation of such data and for the extraction of interesting knowledge that could help in decision-making. The primary objective of the data mining process is prediction. In the medical field in general, and in the field of cancer in particular, a huge amount of clinical data is being produced. Clinical diagnostic data is extremely helpful for preventive care. Lung cancer is one of the group of cancer diseases; it affects the lung and associated organs and is one of the leading causes of death in the modern world. Treatment for this disease and the survival rate of patients largely depend on the stage at which the disease has been diagnosed. Consequently, early detection of the disease is essential to the recovery of patients. Data mining could play a significant part in the early detection of this disease on the basis of a priori knowledge. The process of data mining generally comprises three phases, namely initial exploration, model building and deployment. The human body is made of countless cells, and cells are a fundamental unit of life. Our organs each reach a definite size by increasing the number of their cells, yet they do not grow past a specific limit. This controlled increase in the number of cells is called cell proliferation or cell multiplication. Cell proliferation is a highly coordinated process; that is why our nose and other organs have a standard size. Uncontrolled proliferation of cells results in the accumulation of a mass of cells called a cancer. Since countless cells are involved in cancer development, every tumor presents a complicated set of data. Cancer is a disease in which some of the body's cells grow uncontrollably and spread to other parts of the body. Cancer can begin almost anywhere in the human body, which is made up of trillions of cells. Ordinarily, human cells grow and multiply (through a process called cell division) to form new cells as the body needs them. When cells grow old or become damaged, they die, and new cells take their place. Sometimes this orderly process breaks down, and abnormal or damaged cells grow and multiply when they should not. These cells may form tumors, which are lumps of tissue. Tumors can be cancerous or non-cancerous.

  • by Jitender
    440,-

    A Web Service is a method used by electronic devices to communicate with one another over a network. It is a software feature constantly available at a network address over the Web. Web services are web-based applications which are self-contained, self-descriptive, module-oriented programs that can be published, found, and used online. Web services carry out a wide range of tasks, from straightforward requests to intricate business processes. Once deployed, other software programs and web services can find them and use them from any location and at any time. The World Wide Web Consortium (W3C) has provided the definitions and web standards for web services. The W3C defines a web service as a software component designed to support interoperable machine-to-machine interaction over a network. Quality can refer to functional or non-functional attributes of web services: if quality is measured according to functionality, it is a functional attribute; otherwise it is non-functional. Non-functional quality covers parameters such as the total time taken for the service to complete, the cost of invocation, availability, security features, execution time, etc. The quality attributes of a web service are mainly related to four domains. There may be several vendors of web services; therefore, finding and selecting the best and most easily accessible web service is a tedious task. Discovery and selection allow organizations to form alliances to provide better services and make everything available at a single point for the customers. There are numerous services available on the cloud, but every service has restricted use. A single provider is no longer adequate to fulfil all of a user's requests; therefore, individual services must be combined through service composition to perform a particular task. Hence, the notion of composite services is beginning to be utilized: an assortment of services combined to accomplish a client's request. From the client's point of view, this composition is considered to accomplish a single task, even though it is made up of a set of web services. Security is the central issue that needs to be addressed during the communication of web services. Threats like message alteration, loss of confidentiality, denial of service, broken authorization, and man-in-the-middle attacks may affect web service security. The performance of web service applications depends on service delivery strategies. Performance can be analyzed from the perspective of the service user and of the service process. Service providers are essentially expected to provide some elementary qualities of service. The elementary requirements, in this case, are speed, readiness, security, and reliability.
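    Selecting among several vendors of functionally equivalent services, as discussed above, is often framed as ranking candidates by their non-functional (QoS) attributes. The sketch below is purely illustrative: the service names, QoS values and weights are hypothetical, and the weighted, normalised score is just one simple selection rule.

      # QoS-based service selection sketch (hypothetical services and weights).
      candidates = {
          # name:         (response_time_ms, cost_per_call, availability)
          "WeatherSvc-A": (120, 0.002, 0.999),
          "WeatherSvc-B": (80, 0.005, 0.995),
          "WeatherSvc-C": (200, 0.001, 0.990),
      }

      weights = {"response_time": 0.5, "cost": 0.2, "availability": 0.3}

      def score(rt, cost, avail):
          # Lower response time and cost are better, higher availability is better;
          # each attribute is normalised against the best candidate for that attribute.
          best_rt = min(v[0] for v in candidates.values())
          best_cost = min(v[1] for v in candidates.values())
          best_avail = max(v[2] for v in candidates.values())
          return (weights["response_time"] * best_rt / rt
                  + weights["cost"] * best_cost / cost
                  + weights["availability"] * avail / best_avail)

      ranked = sorted(candidates, key=lambda n: score(*candidates[n]), reverse=True)
      print(ranked)   # best candidate first under these (assumed) weights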

  • by Anuj Rani
    440,-

    Fast-paced technological developments have made digital images very common in the present scenario. Not only is the technology used to savour and store our memories of events, gatherings, news and other memorable moments, but digital images and digital media are also used as proof of crimes, accidents, and other such events for justice and legal action. This comes from the most fundamental concept of "we trust what we see". However, this trust is also abused in the form of photo forgeries. With digital computers and image-editing software, it is simpler than ever to change digital images. This includes fabricated digital photos that are presented as evidence in court or are used to recreate a crime scene. Authenticating such evidence is crucial in such situations in order to fairly resolve the case. For example, according to the Wall Street Journal, approximately 10% of photographs published in the USA are digitally altered. Even original scientific reports have been altered to modify the original data. Sometimes forgeries cannot be detected using conventional image verification methods, such as the naked eye, histograms, or comparison of pixel values. Hence, there is a requirement for more reliable tools to identify a forgery more accurately. These tools are usually based on the physical and mechanical properties of digital images and sensors to detect any abnormality in an image. Furthermore, image manipulation tools evolve with the latest technical developments, and hence forgery detection methods need to become more robust and more reliable. Although digital image forgery detection and securing the information in a digital image (steganography) may look like the same process, the two are different in nature: the first deals with identifying image manipulation without any prior knowledge of changes to the base image, whereas the second hides information in the base image so that any changes happening to the image later on can be identified.
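    One simple family of forgery detectors looks for regions copied from one part of an image to another (copy-move forgery). The sketch below is a hedged toy version of that idea, not a method from the book: it hashes small pixel blocks and flags exact duplicates, whereas practical detectors use robust features to survive compression and retouching.

      # Toy copy-move detector: flag identical 8x8 blocks within one grayscale image.
      import numpy as np

      def duplicate_blocks(gray, block=8):
          """Return pairs of top-left corners whose block x block regions are identical."""
          h, w = gray.shape
          seen, pairs = {}, []
          for y in range(0, h - block + 1, block):
              for x in range(0, w - block + 1, block):
                  key = gray[y:y + block, x:x + block].tobytes()
                  if key in seen:
                      pairs.append((seen[key], (y, x)))
                  else:
                      seen[key] = (y, x)
          return pairs

      # Simulate a forgery: copy one region of a random image onto another.
      rng = np.random.default_rng(0)
      img = rng.integers(0, 256, (64, 64), dtype=np.uint8)
      img[32:40, 32:40] = img[0:8, 0:8]       # the "forged" patch
      print(duplicate_blocks(img))            # expected: [((0, 0), (32, 32))]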

  • by Pooja Mudgil
    496,-

    With the evolution of human computing, a lot of data gets generated every day, and due to the humongous variety of the data it becomes difficult to organize it in a manner that provides significant information regarding a specific context. To retrieve information from a collected set of data, an automated system model is required that holds the information regarding the contexts that are evaluated from the dataset. This automated system can be termed a data processing architecture. Contextual sense is what gives a data processing architecture its significance to the seeker. A data processing architecture is made up of two elements, namely the dataset itself and the processing rules. For example, consider a sense of "hungry" that is defined by the total number of chapattis a person eats in a meal. The rule set could then be: if the person consumes less than or equal to 5 chapattis, the outcome is "hungry", else "not hungry". The rules depend upon the type of membership function values that are provided to the object. As noted earlier, context is the relevant information specific to a particular user. Context mining is the process of extracting information with respect to a particular context. Context is the origin of the demanded information, and due to increasing data complexity in terms of volume, variety, and modularity, it becomes almost impossible to perform contextual analysis manually. Hence, an automated system is required that analyses the requested content based on the stored content via a rule-base architecture. Due to the high computation time of rule-base architectures, propagation-based rule mining is now used in modern-day computation. Users generate their requests through an application layer. The application layer passes the request to the concerned forums via the internet. The forums are connected to data processing centers, which in this modern time frame could be cloud data centers. The processing center or data center has a service manager that analyses the request from the user based on the context demanded by the user query. The context evaluation is done with the help of a trained repository, and a file log is maintained to record new entry values in the system. The entire process can be termed context mining.
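    The chapatti rule quoted above translates directly into code. The sketch below is a hedged illustration: the crisp rule uses the blurb's own threshold and labels, while the membership function's linear shape is an assumption added only to show how membership values could grade the same decision.

      # Crisp rule exactly as stated in the blurb, plus an assumed membership function.
      def hunger_rule(chapattis_eaten: int) -> str:
          """Blurb's rule: <= 5 chapattis -> "hungry", else "not hungry"."""
          return "hungry" if chapattis_eaten <= 5 else "not hungry"

      def hunger_membership(chapattis_eaten: float) -> float:
          """Assumed membership function: degree of 'hungry' falls linearly from 1 to 0."""
          return max(0.0, min(1.0, (8 - chapattis_eaten) / 8))

      for n in (2, 5, 6, 9):
          print(n, hunger_rule(n), round(hunger_membership(n), 2))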

  • by G. Sumathi
    456,-

    Over the past few decades, programmable logic devices (PLD) such as complex PLDs (CPLD) and field programmable gate arrays (FPGA) have been extensively used as the basic building modules in most digital systems due to their robust features such as high density, field re-programmability and faster time-to-market. In addition, the use of PLDs in a design reduces the discrete integrated circuit (IC) population and the associated interconnections on the printed circuit board. This, in turn, increases the reliability of PLD-based systems. However, when features such as unit cost, speed and power are considered, application-specific integrated circuits (ASIC) are the most suitable devices. They also address the problem of fast obsolescence associated with PLDs. Hence, it is clearly evident that electronic systems have proliferated over the past few decades to the point that most aspects of daily life are aided or affected by the automation, control, monitoring, or computational power provided by ICs. A typical PLD design cycle includes programming using a hardware description language (HDL), synthesis (netlist generation), simulation, mapping to technology, place and route (PAR), generation of the configuration bitstream and finally programming of the target device. In general, ASICs follow the same design flow as PLDs up to synthesis, converting the target design into basic digital components. The flow then has further stages such as layout formation using a standard cell library, mask generation, chip fabrication and packaging with post-silicon testing. Together with the advantages of PLD- and ASIC-based digital designs, many security concerns have arisen; in particular, the ability to trust these ICs to perform their specified operation (and only their specified operation) has always been a security concern and has recently become a more active topic of research. The increased deployment of such devices in safety-critical applications or sensitive areas, such as nuclear power plants, space, military, health care, treasury and border control, has also heightened the need to develop secure and reliable very large-scale integration (VLSI) designs that ensure design and data security. The goal of this thesis is to investigate the potential hardware security threats in VLSI-device-based safety-critical applications, in particular to identify key areas of improvement in hardware security and to suggest solutions for the same with their associated overhead.

  • by Anju Gera
    440,-

    The primary means of communication in today's digital world is the internet, whose security characteristics are secrecy, privacy, confidentiality, and authentication. Among the most common methods for securely sending and receiving data are steganography and cryptography. Both of these techniques are fundamental components of information security, even though they work differently. Security of critical information and privacy, as well as the security of individual user transactions over open networks, has become not only a necessity but also increasingly significant. The scope of applications of information and multimedia security research has expanded dramatically in recent years. Confidentiality, integrity, and authenticity are the goals of information security. Since not all information is of equal importance and is likely to be attacked by unauthorized parties, security requirements and procedures must be tailored accordingly. It is necessary to follow highly secure procedures and assign different levels of priority to authorized parties for highly secret data, such as in the case of government, military, and banking. In academic and scientific applications, however, security is primarily concerned with preventing the unauthorized use of resources. Steganography hides secret data from third parties, while cryptography prevents outsiders from reading the information. Steganography is a method that aims to achieve resilience, transparency, and concealability. Embedding encloses a message in a cover using a stego key that is the same for the transmitter and the receiver; the output is a stego audio signal. At the receiver side, the embedded text is recovered from the cover audio using the stego key. Different steganography techniques utilize images, text, audio, video, and network protocols. In contrast to the Human Auditory System, which is more sensitive to distortions in audio cover files, the Human Visual System is less sensitive to image alteration. This is considered a source of motivation for researchers to develop new methods of audio steganography. Existing steganography techniques are primarily concerned with image steganography schemes rather than audio steganography schemes.
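    The embedding and recovery steps described above can be illustrated with a hedged least-significant-bit (LSB) sketch. It is a generic scheme, not necessarily the book's method, and for brevity it omits the stego key (which in practice would, for example, select or permute the sample positions used for embedding).

      # Generic LSB audio steganography sketch on 16-bit samples (illustrative only).
      import numpy as np

      def embed(samples: np.ndarray, message: bytes) -> np.ndarray:
          bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
          stego = samples.copy()
          stego[:len(bits)] = (stego[:len(bits)] & ~1) | bits   # overwrite the LSBs
          return stego

      def extract(stego: np.ndarray, n_bytes: int) -> bytes:
          bits = (stego[:n_bytes * 8] & 1).astype(np.uint8)
          return np.packbits(bits).tobytes()

      cover = np.random.default_rng(0).integers(-2000, 2000, 4000, dtype=np.int16)
      stego = embed(cover, b"secret")
      print(extract(stego, 6))      # b'secret'; the cover audio is barely altered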

  • by Rajesh Patel
    466,-

    Cognitive science mainly comprises the interdisciplinary scientific study of the mind, including approaches from a wide variety of fields, which can be broadly classified as follows. Philosophical perspective: one of the oldest disciplines under cognitive science, concerned with a way of thinking about something based on experiment or reasoning; reasoning involves reaching a fact based on the rules of logic or drawing a conclusion based on similarities among many observations. Psychological perspective: the study of mental phenomena to understand both mind and behaviour. Neuroscience perspective: the study of brain anatomy and its relation to cognitive processes in terms of the underlying brain mechanisms. Linguistics perspective: concerned with the ability of the brain to understand the complex processes related to language. Artificial intelligence perspective: directed toward developing approaches that can mimic the human brain; these developments lead to programs which can even perform complex operations. Human memory can be considered as the human brain's capacity to encode, store and recall the most relevant and valuable information out of a continuous stream of sensory perceptions and experiences derived from interaction with the outside world, with a view to using this stored information for the analysis of future events and for the acquisition of skills based on work and experience. Human memory may be classified mainly into two broad categories. i) Short-term memory: this refers to the working memory, dealing with storage and recall of information over a relatively short period of time. Working memory is considered to have only a limited capacity, allowing temporary storage of information and its quick retrieval whenever required for performing cognitive tasks. ii) Long-term memory: this refers to the storage of information on a more permanent basis, or over a relatively long period of time, which can be retrieved whenever required, either consciously or unconsciously. Information received from the sensory organs and during interaction with the outside world is processed initially in the short-term memory store, and depending upon the type of information and perceived requirements, selected information may be transferred to long-term memory for permanent storage. The processing of information in short-term memory involves important cognitive tasks such as reasoning, learning, and understanding. A cognitive task may be defined as a task that involves one or more aspects such as the representation of information and knowledge, thought processes relating to these representations, and the analysis of information leading to the development of strategies to achieve pre-set goals, and it may require mental processes such as attention, memory, judgement, and decision-making. The complexity of a cognitive task depends on the load exerted on the working memory during its execution.

  • by Arpit Patel
    320,-

    A space-borne instrument is an instrument that has to survive a very hostile space environment. The cost of launch is determined by the payload's weight. In that regard, it is necessary for the instrument onboard to have the least possible mass, power, and size in order to complete its tasks with the best possible performance. Any instrument onboard a spacecraft is likely to be exposed to harsh radiation, extreme temperature swings, and high vacuum conditions. The instrument should also be able to withstand launch vehicle vibration. Care must be taken over the above-mentioned aspects at various levels of the instrument design, such as selecting components/devices with space qualification, the required radiation testing, and the design and packaging of the instrument. Instruments built with Commercial-Off-The-Shelf (COTS) components will not withstand the harsh space environment and satellite launch loads. For these reasons, "space instruments are custom-designed one-of-a-kind instruments, and the construction of such a one-of-a-kind instrument is dependent on the mission and the instrument configuration required for the scientific application". Radiation detectors are widely used for measuring radiation emitted by various space objects in the X-ray, Gamma-ray, or high-energy particle regions of the spectrum. Ionizing and non-ionizing radiation are the two main types of radiation. Non-ionizing radiation such as ultraviolet is less energetic: the atoms and molecules that interact with UV light particles receive energy from them but do not have their electrons removed. There are several types of ionizing radiation, including galactic cosmic radiation, trapped radiation, and solar energetic particles. Galactic cosmic radiation arrives as massive clouds of high-energy charged particles believed to be emitted by supernovas. The earth's magnetic field is strong enough to trap charged particles, and within the field these particles travel in a spiral pattern. Solar particle events involve the Sun releasing energetic solar particles; sudden, powerful storms may develop as a result. Radiation detector instruments usually have a similar kind of readout chain, which contains a charge-sensitive preamplifier (CSPA), a shaping amplifier and a pulse height analyzer. A suitable pulse height analyzer is required based on instrument specifications such as energy resolution, count rate, mass, etc. There are currently no commercially available back-end electronics to read this many channels. The readout approaches that are currently available demand more mass, power, and processing. The development of a new pulse height analysis technique for spaceborne instruments is therefore required. The technique should provide comparable or better performance while using less instrument mass, power, and size.
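    To make the readout chain concrete, the following hedged sketch (synthetic data and parameters are assumptions, not the book's design) does in software what a pulse height analyzer does in hardware: it finds pulse peaks in a digitized detector waveform and bins the peak heights into an energy-channel histogram.

      # Software pulse height analysis on a synthetic waveform (illustrative only).
      import numpy as np

      rng = np.random.default_rng(2)
      n = 100_000
      signal = rng.normal(0, 0.01, n)            # baseline noise

      # Inject exponentially shaped pulses with random amplitudes (stand-ins for events).
      pulse = np.exp(-np.arange(50) / 10.0)
      for start in rng.choice(n - 100, 300, replace=False):
          signal[start:start + 50] += rng.uniform(0.2, 1.0) * pulse

      threshold = 0.1
      above = signal > threshold
      starts = np.flatnonzero(above[1:] & ~above[:-1]) + 1   # threshold crossings
      heights = [signal[s:s + 50].max() for s in starts]     # pulse peak = "energy"

      counts, edges = np.histogram(heights, bins=64, range=(0.0, 1.2))
      print(len(heights), "pulses; most populated channel:", counts.argmax())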

  • by Darshan Lal Meena
    470,-

    The two largest attractions of wireless communication have been mobility and ease of deployment - laying cables is not only laborious and time consuming, but their maintenance is equally bothersome. Wireless communication today surrounds us in many colors and flavors, each with its unique frequency band, coverage, and range of applications. It has matured to a large extent, and standards have evolved for Personal Area Networks, Local Area Networks as well as Broadband Wireless Access. In any but the most trivial networks, some mechanism is required for routing the packets from the source to the final destinations. This includes the discovery and maintenance of routes along with associated costs. In what is called an 'infrastructure based' wireless network, the job of routing is assigned to dedicated nodes called access points (AP). Configurations of the APs are much less dynamic than their, possibly mobile, endpoint nodes. APs are like base stations which keep track of nodes' associations/disassociations, authentication, etc. and control the traffic flow between their clients as well as between fellow APs. The AP may also be connected to the Internet, thereby providing Internet connectivity to its clients. A promising and very attractive class of wireless networks that has emerged is based on an ad hoc topology; these networks are known as Wireless Ad Hoc Networks. The term wireless network implies a computer network in which the communication links are wireless. The term ad hoc comes from the fact that there is no stable infrastructure for packet forwarding/routing. The entire life-cycle of ad hoc networks can be categorized into first-, second-, and third-generation systems; present-day ad hoc network systems are considered the third generation. The first generation of wireless ad hoc networks emerged in 1972; at that time, they were known as Packet Radio Networks (PRNET). In combination with ALOHA and CSMA (Carrier Sense Multiple Access) approaches for medium access control and a type of distance-vector routing, PRNET was used on a trial basis to demonstrate different networking capabilities in a combat environment. The second generation of ad hoc networks emerged in the 1980s, when the existing network was re-implemented and improved as part of the Survivable Adaptive Radio Networks (SURAN) program. This provided a packet-switched network in the mobile battlefield without any infrastructure, and the program was beneficial in enhancing radio performance by making the radios cheaper, smaller, and more resilient to electronic attacks. The concept of commercial ad hoc networks then emerged in the 1990s, with notebook computers as the main communications equipment. The IEEE 802.11 subcommittee adopted the term "ad-hoc networks", and the research community started working on the possibility of organizing ad hoc networks in other areas of application. At the same time, work was done on the enhancement of ad hoc networks. Two systems were then developed on the basis of this concept: the first is GloMo (Global Mobile Information Systems) and the other is NTDR (Near-term Digital Radio). The idea behind GloMo was to provide a distributed office network with Ethernet-type multimedia connectivity anywhere and at any time in handheld devices.

  • by Karma Gyatso
    456,-

    Every year, there are more patients with chronic diseases, and they tend to be younger people as the speed of life hastens aging. This is a serious problem for both personal health and public health. Chronic diseases significantly impact patients' health and quality of life, the effects of some disorders are permanent and even incurable, and this places a significant burden on the families and communities of the patients. In recent years, considerable progress has been made in the treatment of illness, and this has had a big impact on outcomes for chronic diseases, including the monitoring of therapy and clinical diagnosis, amongst other things. Large amounts of obscure health data can be analyzed to extract previously unknown and useful information as well as to predict future trends. Corporations are now overwhelmed by the amount of data contained in database systems, consisting of unstructured data such as pictures, video, and sensor data. To discover data trends and make predictions, deep learning and machine learning algorithms are utilized in this case, along with other optimization techniques. We employed a variety of machine learning algorithms for these strategies, including SVM, neural networks, and linear and nonlinear regression techniques. Prescriptive analytics may then apply the knowledge gained from predictive analytics to prescribe actions based on predicted findings. Machine learning is a type of predictive analytics that helps enterprises move up the business intelligence maturity curve by expanding their usage of predictive analytics to include autonomous, forward-looking decision support instead of just descriptive analytics focusing on the past. Although the technology has been there for a while, many businesses are now taking a fresh look at it due to the excitement surrounding new methods and products. Machine learning-based analytical solutions frequently function in real time, giving business a new dimension. Real-time analytics provides information to staff "on the front lines" to improve performance hour by hour, while older models still provide important reports and analyses to senior decision-makers. Machine learning, a branch of artificial intelligence, trains machines to use certain algorithms to analyse, learn from, and provide predictions and recommendations from massive volumes of data. Without human intervention, predictive models may adjust to new data and learn from past iterations to produce decisions and outcomes that are ever more consistent and trustworthy.
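    As a hedged, purely illustrative sketch of the kind of model listed above (the two features, their distributions and the labels are synthetic assumptions, not data from the book), an SVM is trained to separate "healthy" from "at-risk" records and evaluated on a held-out split:

      # Synthetic-data SVM example (illustrative only).
      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(3)
      # Hypothetical features, e.g. a blood marker and age; labels are synthetic.
      healthy = rng.normal([5.0, 40.0], [1.0, 10.0], (200, 2))
      at_risk = rng.normal([7.5, 55.0], [1.0, 10.0], (200, 2))
      X = np.vstack([healthy, at_risk])
      y = np.array([0] * 200 + [1] * 200)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
      model = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
      print("held-out accuracy:", round(model.score(X_te, y_te), 2))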

  • by Songa Shashank Naidu
    510,-

    Mobile tower radiation refers to the electromagnetic radiation emitted by cell phone towers. This radiation includes radio waves, microwaves, and other types of electromagnetic waves that are used for communication between mobile devices and the tower. The potential health effects of mobile tower radiation have been the subject of ongoing research and debate. While some studies suggest that exposure to this type of radiation can have negative health consequences, such as an increased risk of cancer, others suggest that the levels of radiation emitted by mobile towers are too low to cause harm. The effect of mobile phone tower radiation on plants is a relatively new field of study. It emerged due to concerns over the potential harmful effects of non-ionizing electromagnetic radiation (NIER) emitted by mobile phone towers on living organisms, including plants. The growing popularity of mobile phones and the increasing number of mobile phone towers worldwide have intensified the need for more research on the effects of mobile phone tower radiation on plants. The goal of such research is to determine the potential risks associated with exposure to NIER and to develop measures to minimize or prevent any harmful effects on plant species. Non-ionizing electromagnetic radiation (NIER) refers to a type of electromagnetic radiation that does not have enough energy to ionize atoms or molecules. Examples of NIER include radio waves, microwaves, infrared radiation, and visible light. NIER is generally considered to be less harmful than ionizing radiation, such as X-rays and gamma rays, which have enough energy to ionize atoms and molecules and can cause damage to cells and DNA. However, there is still ongoing research into the potential health effects of long-term exposure to NIER, particularly from sources such as mobile phones, Wi-Fi, and other wireless devices. Some studies suggest that NIER exposure may have adverse effects on human health, such as an increased risk of cancer, cognitive impairment, and other health problems.

  • av Akanksha Bhardwaj
    500,-

    The sixth of the Sustainable Development Goals emphasizes ensuring access to water and sanitation for all. 7.6 billion people depend on freshwater resources, which account for 2.5% of the total water available on Earth; of that 2.5%, around 2.2% is locked in ice and glaciers, leaving only a thin proportion available for human use. A major proportion of the accessible water resources is affected by pollution in one form or another. As the United Nations website puts it, "Worldwide, one in three people do not have access to safe drinking water, two out of five people do not have a basic hand-washing facility with soap and water, and more than 673 million people still practice open defecation". Industrial wastes, household sewage, and radioactivity are among the other main reasons for the growing contamination of freshwater resources. As a result, water bodies in many regions are contaminated and deemed unfit for human consumption. It is estimated that more than 80% of wastewater resulting from human activities is discharged into rivers or the sea without removal of pollutants. Industries are among the most notable sources of water pollution; those that use large amounts of water, such as dyeing, textile, metal processing (electroplating), paper and pulp, and tanneries, are some of the major contributors to the pollution of water resources. Large amounts of organic and inorganic pollutants generated by these industries are dumped in the surrounding water bodies. It is estimated that in India one-third of water pollution is contributed by industrial discharge, solid waste, and hazardous waste. The pollutants arising from industrial discharge are usually left untreated, which poses a serious threat to our environment. These pollutants are broadly classified as organic and inorganic pollutants. Managing them is challenging, and textile and electroplating industries often operate in our immediate neighbourhood. Adsorption is a heterogeneous-phase phenomenon in which a molecule binds to the solid surface of an adsorbent by physical or chemical forces. The process operates by adding adsorbent to wastewater after optimizing the process variables; contaminants are thereafter adsorbed on the surface of the adsorbent by various physical and chemical forces.
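    Adsorption capacity is commonly described with equilibrium isotherm models. The sketch below is an illustration with hypothetical parameters (not values from the book), evaluating the widely used Langmuir isotherm q_e = q_max·K_L·C_e / (1 + K_L·C_e) to estimate how much pollutant an adsorbent might hold at a given equilibrium concentration.

        # Langmuir isotherm sketch (hypothetical parameters, for illustration only)
        def langmuir_uptake(c_e, q_max, k_l):
            """Equilibrium uptake q_e (mg adsorbate per g adsorbent).

            c_e   : equilibrium pollutant concentration in solution (mg/L)
            q_max : monolayer adsorption capacity of the adsorbent (mg/g)
            k_l   : Langmuir constant related to adsorption energy (L/mg)
            """
            return q_max * k_l * c_e / (1.0 + k_l * c_e)

        # Assumed example values for a dye on a low-cost adsorbent
        for c_e in (5, 20, 50, 100):  # mg/L
            q_e = langmuir_uptake(c_e, q_max=120.0, k_l=0.05)
            print(f"C_e = {c_e:4d} mg/L -> q_e = {q_e:6.1f} mg/g")

    Fitting such a model to measured data is one way the "optimizing the process variables" step mentioned above is carried out in practice.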

  • av V. V. Shajimon
    500,-

    Human physiology is the study of how the human body functions at the cellular, tissue, organ, and system level. It encompasses various physiological systems such as the nervous, cardiovascular, respiratory, endocrine, digestive, and renal systems. Some key aspects of human physiology include: Homeostasis, the maintenance of a stable internal environment within the body despite changes in the external environment; Cellular and Molecular Physiology, the study of how cells and molecules in the body function, interact, and communicate with each other; Energy Metabolism, the study of how the body uses and produces energy for different physiological processes; Neurophysiology, the study of the nervous system and how it controls and coordinates body functions; Cardiovascular Physiology, the study of the heart and blood vessels and how they transport oxygen, nutrients, and other substances throughout the body; Respiratory Physiology, the study of how the lungs and respiratory system exchange gases (oxygen and carbon dioxide) with the external environment; and Endocrine Physiology, the study of how hormones and other signalling molecules regulate various physiological processes in the body. Understanding human physiology is essential for maintaining good health, preventing diseases, and developing new medical treatments. Human oral physiology refers to the study of the structure, function, and mechanisms of the mouth and its related structures. The oral cavity is responsible for a number of functions, including biting, chewing, swallowing, and speaking. The physiology of the oral cavity is important for maintaining good oral health. Practicing good oral hygiene habits such as brushing and flossing regularly, visiting the dentist regularly, and eating a healthy diet can help prevent oral health problems and maintain overall health and well-being. Human physiology and oral health are closely related.

  • av Hariharasitaraman S
    486,-

    The National Institute of Standards and Technology defines cloud computing as a web-based paradigm that facilitates the deployment of flexible and cost-effective business models through the sharing of expensive computational resources such as storage, applications, infrastructure, networks, services, and data. It is the most sought-after platform for hosting solutions by service providers because it scales to radical changes in the client population. Clients recognize the potential of cloud computing platforms and rely on Cloud Service Providers (CSPs) to harness the versatility of this environment when deploying business solutions and outsourcing sensitive data and computations. Cloud computing has witnessed phenomenal growth in diverse aspects such as physical infrastructure, deployment models, protocol architectures, and security mechanisms, providing reliable solutions in various domains. It is viewed as an integrated platform for data storage, deployment of business models, delivery of services, and sharing of resources, enabling resource-constrained clients to access various services at low cost. Cloud computing models must be designed with certain key characteristics to provide efficient service; cloud-based services are customer-centric and designed in tune with client requirements. Storage is one of the major requirements of enterprises for hosting applications and storing data. Cloud storage is a newer class of cloud service concerned with allocating shared storage among multiple clients simultaneously. Cloud storage providers maintain data in protected physical storage devices distributed across multiple locations and ensure the availability and accessibility of data to clients through well-designed virtual machine images and web service interfaces. They are confronted with performance and security concerns and spend heavily to meet these requirements.
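    Cloud storage is typically exposed to clients through web service interfaces, as the description notes. The sketch below is a generic, hypothetical illustration of such a client (the endpoint URL, bucket name, and token are placeholders, not an API documented in the book), showing how an object might be uploaded and retrieved over HTTPS.

        import requests

        # Hypothetical endpoint and credentials, for illustration only
        BASE_URL = "https://storage.example-csp.com/v1/my-bucket"
        HEADERS = {"Authorization": "Bearer <access-token>"}

        def put_object(key, data: bytes):
            """Upload an object via the (assumed) REST-style storage interface."""
            resp = requests.put(f"{BASE_URL}/{key}", data=data, headers=HEADERS, timeout=30)
            resp.raise_for_status()

        def get_object(key) -> bytes:
            """Download an object back from the same interface."""
            resp = requests.get(f"{BASE_URL}/{key}", headers=HEADERS, timeout=30)
            resp.raise_for_status()
            return resp.content

        if __name__ == "__main__":
            put_object("reports/q1.txt", b"outsourced business data")
            print(get_object("reports/q1.txt"))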

  • av Sivanand R
    456,-

    Nanoscience and nanotechnology constitute a developing strategic industry with excellent economic prospects. Driven by customer demand and multidisciplinary advances, the electrical and electronics sectors crossed the nanoscale threshold in the 1990s, and in some regions electrical engineering has been at the forefront of introducing nanotechnology concepts into traditional curriculum topics. Over the past decade, nanotechnology has made its impact felt and has already been integrated into several sectors. Tracing the history of nanotechnology education over the past 15 years and into the future allows one to examine its influence, the resources available, and the integration of nanoscale concepts; well-designed, deliberate programmes can be used to demonstrate the ideas presented. The field of nanotechnology is developing rapidly in chemistry, physics, biology, and electrical engineering, and has attracted tremendous attention. It is hoped that it will shrink the minimum scale of electrical, optical, and mechanical systems, which have had a considerable economic influence over the years, and researchers and specialists also hope it will lead to new and striking physics that can serve as the basis for new technology. The influence of nanotechnology on electrical engineering also covers nanoparticle synthesis, the handling of nanomaterials, and their applications; it requires researchers to investigate new patterns and structures of nanoscale objects for new electrical applications. Generally speaking, nanotechnology is the systematic study and development of materials, devices, systems, and products that exploit structures and dimensions at the nanometre scale with at least one novel property. "Nano" is derived from Greek and denotes one billionth. Usually, when particle sizes are in the range of 1-100 nm, they are called nanoparticles or nanomaterials. To give a sense of this scale: 1 nm = 10 Å = 10⁻⁹ m, and 1 µm (micron) = 10⁻⁴ cm = 1000 nm. Nanoscience research concerns atomic and molecular structures whose properties differ significantly from those of the bulk material. Fundamentally, nanotechnology makes it possible to supply new, inexpensive, and more efficient products across a range far wider than many of today's building materials. Nanomaterials are nanoscale-sized materials: in the broadest sense, a nanomaterial has dimensions larger than a molecule but much smaller than bulk matter. Nanometre-scale materials, or nanocrystals (NCs), typically have chemical properties that differ from those of the bulk.
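    Since the description quotes the nanometre conversions, a tiny sketch (purely illustrative, not taken from the book) confirms the arithmetic relating nanometres, ångströms, microns, and centimetres.

        # Length-scale conversions quoted in the description (illustrative check)
        NM_IN_METERS = 1e-9          # 1 nm = 10^-9 m
        ANGSTROM_IN_METERS = 1e-10   # 1 Å  = 10^-10 m
        MICRON_IN_METERS = 1e-6      # 1 µm = 10^-6 m

        print("1 nm in angstroms:", NM_IN_METERS / ANGSTROM_IN_METERS)  # 10.0
        print("1 micron in nm:   ", MICRON_IN_METERS / NM_IN_METERS)    # 1000.0
        print("1 micron in cm:   ", MICRON_IN_METERS / 1e-2)            # 1e-4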

  • av Gohil Nisarg Govindbhai
    406,-

    Microbial pigments are gaining enormous attention owing to their promising therapeutic properties, and their natural bright hues make them attractive for use in the pharmaceutical, nutraceutical, cosmetic, food, and textile industries. These microbial secondary metabolites are not merely natural bio-dyes; some of them also possess splendid antibacterial, antiviral, antimalarial, antidiabetic, anticancer, antioxidant, and other biological properties. Owing to various anthropogenic activities and environmental pollution, the global demand for natural dyes has increased; the market is forecast to be worth USD 5.0 billion by 2024, growing at a CAGR of 11%. However, wide acceptance of microbial pigments is still hindered by intense competition with inexpensive synthetic dyes, low yields, and high extraction costs. Microorganisms produce an array of important secondary metabolites for numerous ecological benefits. Though these metabolites are not essential to support the growth of the microbe, they play a vital role in exhibiting phenomenal properties that entice various industries, for instance, the pharmaceutical, nutraceutical, paper and textile, cosmetics, and related fields. Over the last few decades, there has been a surge in the expeditious discovery and synthesis of secondary metabolites with novel properties. Processing and production of commercial goods from agro-based raw materials generate large amounts of agro-industrial waste. Significant developments in the area of microbial biotechnology have opened up new avenues for judicious utilization of this agro-industrial waste for the synthesis of high value-added bioproducts. Directing the agro-industrial waste generated towards its utilization as the main source of nourishment for the microbes can readily curtail the expense that goes into the production of commercially important metabolites.
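    The market figure above is a compound-growth projection. The sketch below simply shows how a value grows at an 11% CAGR via V_n = V_0·(1 + r)^n; the starting value and horizon are hypothetical, chosen only to illustrate the formula, not figures from the book.

        # Compound annual growth rate (CAGR) illustration with assumed figures
        def project(value_now, cagr, years):
            """Future value after `years` of growth at rate `cagr` (e.g. 0.11 for 11%)."""
            return value_now * (1.0 + cagr) ** years

        # Hypothetical: a market worth USD 3.0 billion growing at 11% per year
        for year in range(6):
            print(f"year {year}: {project(3.0, 0.11, year):.2f} billion USD")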

  • av Simran Choudhary
    456,-

    From earlier times, varying security needs resulted in the design of various cryptographic primitives such as encryption, digital signatures, message digests, and digital certificates. Rapid digitization, online commerce, digital currency, cloud computing, the Internet of Things, and the looming threat of quantum computers have moved researchers to seek provably secure cryptographic methods. According to Shor's algorithm, given a sufficiently large quantum computer, conventional public-key cryptography techniques (such as RSA and Elliptic-Curve Diffie-Hellman) based on the hardness of integer factorization and the elliptic curve discrete logarithm problem can be broken in polynomial time. Data encrypted today using conventional methods will therefore no longer be forward secure. Thus, the need of the hour is to develop and switch to quantum-secure cryptographic primitives to protect present and future communication. Classical cryptosystems mainly involved three techniques for encrypting information: substitution, transposition, and steganography. In a substitution cipher, the letters of the plaintext are replaced with other letters or numbers; the most famous and oldest substitution cipher is the Caesar cipher, and the German military used substitution techniques in the Enigma machine for encryption during World War II. A transposition cipher permutes the plaintext letters, i.e. the positions of the letters are shuffled. Steganography is the method of hiding a message in an image, video, or other media. Substitution and transposition form the basic building blocks of all symmetric-key encryption schemes, and multiple rounds of substitution and transposition yield a strong, hard cryptosystem. As previously stated, cryptography was once used to ensure merely the confidentiality of information transmitted over insecure networks. However, the digital revolution ushered in modern cryptographic advances in authentication, non-repudiation, and integrity. Digital signatures, digital certificates, hashing, and message digests are all common primitives. These primitives have also allowed modern-day technologies such as online banking, credit cards, electronic commerce, and other digital tasks to be secure enough to attract the general public. Identity-based encryption, location-based encryption, quantum-safe cryptographic primitives, homomorphic encryption, blockchain, and cryptographic currencies (Bitcoin and Ethereum) are some of the modern primitives.
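    As a minimal illustration of the substitution and transposition building blocks described above (a sketch, not code from the book), the snippet below implements a Caesar shift and a simple keyless columnar transposition; practical transposition ciphers normally add a key to order the columns.

        # Classical cipher building blocks: substitution (Caesar) and transposition
        def caesar_encrypt(plaintext, shift=3):
            """Substitute each letter with the letter `shift` positions later in the alphabet."""
            out = []
            for ch in plaintext.upper():
                if ch.isalpha():
                    out.append(chr((ord(ch) - ord('A') + shift) % 26 + ord('A')))
                else:
                    out.append(ch)
            return "".join(out)

        def columnar_transpose(plaintext, columns=4):
            """Write the text row by row into `columns` columns, then read it column by column."""
            padded = plaintext + "X" * (-len(plaintext) % columns)
            return "".join(padded[c::columns] for c in range(columns))

        msg = "ATTACK AT DAWN"
        print(caesar_encrypt(msg))      # letters replaced: DWWDFN DW GDZQ
        print(columnar_transpose(msg))  # same letters, positions shuffled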
