
Books in the Synthesis Lectures on Algorithms and Software in Engineering series

  • by Peter Knee
    480,-

    Although the field of sparse representations is relatively new, research activities in academic and industrial labs are already producing encouraging results. The sparse signal or parameter model has motivated several researchers and practitioners to explore high complexity/wide bandwidth applications such as Digital TV, MRI processing, and certain defense applications. The potential signal processing advancements in this area may influence radar technologies. This book presents the basic mathematical concepts along with a number of useful MATLAB® examples to emphasize the practical implementations both inside and outside the radar field. Table of Contents: Radar Systems: A Signal Processing Perspective / Introduction to Sparse Representations / Dimensionality Reduction / Radar Signal Processing Fundamentals / Sparse Representations in Radar
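
    A quick way to see the sparse model in action is greedy recovery of a k-sparse x from y = Ax. Below is a minimal NumPy sketch of orthogonal matching pursuit, offered as a generic illustration rather than the book's own MATLAB examples:

    ```python
    import numpy as np

    def omp(A, y, k):
        """Orthogonal matching pursuit: greedily recover a k-sparse x with y ~ A @ x."""
        residual, support = y.copy(), []
        for _ in range(k):
            # Pick the dictionary column most correlated with the residual.
            support.append(int(np.argmax(np.abs(A.T @ residual))))
            # Least-squares fit on the chosen columns, then update the residual.
            coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coeffs
        x = np.zeros(A.shape[1])
        x[support] = coeffs
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((64, 256))
    A /= np.linalg.norm(A, axis=0)                    # unit-norm dictionary columns
    x_true = np.zeros(256)
    x_true[[10, 50, 200]] = [1.0, -2.0, 0.5]
    print(np.allclose(omp(A, A @ x_true, 3), x_true, atol=1e-8))  # True
    ```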

  • by Scott A. Whitmire
    730,-

    Software development is hard, but creating good software is even harder, especially if your main job is something other than developing software. Engineer Your Software! opens the world of software engineering, weaving engineering techniques and measurement into software development activities. Focusing on architecture and design, Engineer Your Software! claims that no matter how you write software, design and engineering matter and can be applied at any point in the process. Engineer Your Software! provides advice, patterns, design criteria, measures, and techniques that will help you get it right the first time. Engineer Your Software! also provides solutions to many vexing issues that developers run into time and time again. Developed over 40 years of creating large software applications, these lessons are sprinkled with real-world examples from actual software projects. Along the way, the author describes common design principles and design patterns that can make life a lot easier for anyone tasked with writing anything from a simple script to the largest enterprise-scale systems.

  • by Ioannis Kyriakides
    386,-

    The adaptive configuration of nodes in a sensor network has the potential to improve sequential estimation performance by intelligently allocating limited sensor network resources. In addition, the use of heterogeneous sensing nodes provides a diversity of information that also enhances estimation performance. This work reviews cognitive systems and presents a cognitive fusion framework for sequential state estimation using adaptive configuration of heterogeneous sensing nodes and heterogeneous data fusion. This work also provides an application of cognitive fusion to the sequential estimation problem of target tracking using foveal and radar sensors.

  • by Vimal Kumar
    666,-

    The sensor cloud is a new model of computing paradigm for Wireless Sensor Networks (WSNs), which facilitates resource sharing and provides a platform to integrate different sensor networks where multiple users can build their own sensing applications at the same time. It enables a multi-user on-demand sensory system, where computing, sensing, and wireless network resources are shared among applications. Therefore, it has inherent challenges for providing security and privacy across the sensor cloud infrastructure. With the integration of WSNs with different ownerships, and users running a variety of applications including their own code, there is a need for a risk assessment mechanism to estimate the likelihood and impact of attacks on the life of the network. The data being generated by the wireless sensors in a sensor cloud need to be protected against adversaries, which may be outsiders as well as insiders. Similarly, the code disseminated to the sensors within the sensor cloud needs to be protected against inside and outside adversaries. Moreover, since the wireless sensors cannot support complex and energy-intensive measures, the lightweight schemes for integrity, security, and privacy of the data have to be redesigned. The book starts with the motivation and architecture discussion of a sensor cloud. Due to the integration of multiple WSNs running user-owned applications and code, attacks are more likely. Thus, next, we discuss a risk assessment mechanism to estimate the likelihood and impact of attacks on these WSNs in a sensor cloud using a framework that allows the security administrator to better understand the threats present and take necessary actions. Then, we discuss integrity and privacy preserving data aggregation in a sensor cloud as it becomes harder to protect data in this environment. Integrity of data can be compromised as it becomes easier for an attacker to inject false data in a sensor cloud, and, due to the hop-by-hop nature of aggregation, data privacy could be leaked as well. Next, the book discusses a fine-grained access control scheme which works on the secure aggregated data in a sensor cloud. This scheme uses Attribute Based Encryption (ABE) to achieve the objective. Furthermore, to securely and efficiently disseminate application code in a sensor cloud, we present a secure code dissemination algorithm which first reduces the amount of code to be transmitted from the base station to the sensor nodes. It then uses Symmetric Proxy Re-encryption along with Bloom filters and Hash-based Message Authentication Codes (HMACs) to protect the code against eavesdropping and false code injection attacks.
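
    As a flavor of the hash-based integrity checks involved, the sketch below tags each disseminated code chunk with an HMAC so a sensor can reject injected code. It is a generic illustration using Python's standard library, not the book's full scheme (which adds Bloom filters and proxy re-encryption); the key value is hypothetical:

    ```python
    import hmac, hashlib

    def tag_chunk(key, seq, chunk):
        # HMAC over the sequence number plus the code chunk, as in
        # hash-based integrity protection of disseminated code images.
        return hmac.new(key, seq.to_bytes(4, "big") + chunk, hashlib.sha256).digest()

    def verify_chunk(key, seq, chunk, tag):
        # Constant-time comparison avoids timing side channels.
        return hmac.compare_digest(tag_chunk(key, seq, chunk), tag)

    key = b"shared-network-key"                       # hypothetical pre-shared key
    tag = tag_chunk(key, 0, b"\x01\x02 code page")
    print(verify_chunk(key, 0, b"\x01\x02 code page", tag))   # True
    print(verify_chunk(key, 0, b"tampered code page", tag))   # False
    ```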

  • by Brian Mears
    446,-

    The availability of inexpensive, custom, highly integrated circuits is enabling some very powerful systems that bring together sensors, smart phones, wearables, cloud computing, and other technologies. To design these types of complex systems we are advocating a top-down simulation methodology to identify problems early. This approach enables software development to start prior to expensive chip and hardware development. We call the overall approach virtual design. This book explains why simulation has become important for chip design and provides an introduction to some of the simulation methods used. The audio lifelogging research project demonstrates the virtual design process in practice. The goals of this book are to: explain how silicon design has become more closely involved with system design; show how virtual design enables top-down design; explain the utility of simulation at different abstraction levels; and show how open-source simulation software was used in audio lifelogging. The target audience for this book is faculty, engineers, and students who are interested in developing digital devices for Internet of Things (IoT) types of products.

  • by Christos P. Loizou
    716,-

    In ultrasound imaging and video, visual perception is hindered by speckle, a multiplicative noise that degrades quality. Noise reduction is therefore essential for improving the visual observation quality or as a pre-processing step for further automated analysis, such as image/video segmentation, texture analysis, and encoding in ultrasound imaging and video. The goal of the first book (book 1 of 2) was to introduce the problem of speckle in ultrasound image and video as well as the theoretical background, algorithmic steps, and the MATLAB™ code for the following group of despeckle filters: linear despeckle filtering, non-linear despeckle filtering, diffusion despeckle filtering, and wavelet despeckle filtering. The goal of this book (book 2 of 2) is to demonstrate the use of a comparative evaluation framework based on these despeckle filters (introduced in book 1) on cardiovascular ultrasound image and video processing and analysis. More specifically, the despeckle filtering evaluation framework is based on texture analysis, image quality evaluation metrics, and visual evaluation by experts. This framework is applied in cardiovascular ultrasound image/video processing to the tasks of segmentation and structural measurements, texture analysis for differentiating between two classes (i.e., normal vs. disease), and efficient encoding for mobile applications. It is shown that despeckle noise reduction improved segmentation and measurement (of the tissue structure investigated), increased the texture feature distance between normal and abnormal tissue, improved image/video quality evaluation and perception, and produced significantly lower bitrates in video encoding. Furthermore, in order to facilitate further applications, we have developed in MATLAB™ two different toolboxes that integrate image (IDF) and video (VDF) despeckle filtering, texture analysis, and image and video quality evaluation metrics. The code for these toolsets is open source and available to download as a complement to the two monographs.
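
    Image quality evaluation metrics of the kind used in such a framework can be as simple as PSNR; here is a minimal Python sketch (the toolboxes themselves are in MATLAB and include many more metrics):

    ```python
    import numpy as np

    def psnr(ref, test, peak=1.0):
        """Peak signal-to-noise ratio in dB between a reference and a
        processed (e.g., despeckled) image with the given peak value."""
        mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
        return 10.0 * np.log10(peak ** 2 / mse)

    rng = np.random.default_rng(0)
    ref = rng.random((64, 64))
    noisy = np.clip(ref + 0.01 * rng.standard_normal(ref.shape), 0, 1)
    print(psnr(ref, noisy))          # ~40 dB for noise of std 0.01
    ```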

  • by Christos P. Loizou
    796,-

    It is well known that speckle is a multiplicative noise that degrades image and video quality and hinders the visual expert's evaluation in ultrasound imaging and video. This creates the need for robust despeckling image and video techniques for both routine clinical practice and tele-consultation. The goal of this book (book 1 of 2) is to introduce the problem of speckle occurring in ultrasound image and video as well as the theoretical background (equations), the algorithmic steps, and the MATLAB™ code for the following group of despeckle filters: linear filtering, nonlinear filtering, anisotropic diffusion filtering, and wavelet filtering. This book proposes a comparative evaluation framework of these despeckle filters based on texture analysis, image quality evaluation metrics, and visual evaluation by medical experts. Despeckle noise reduction through the application of these filters will improve the visual observation quality, or it may be used as a pre-processing step for further automated analysis, such as image and video segmentation and texture characterization in ultrasound cardiovascular imaging, as well as for bandwidth reduction in ultrasound video transmission for telemedicine applications. The aforementioned topics are covered in detail in the companion book to this one. Furthermore, in order to facilitate further applications, we have developed in MATLAB™ two different toolboxes that integrate image (IDF) and video (VDF) despeckle filtering, texture analysis, and image and video quality evaluation metrics. The code for these toolsets is open source and available to download as a complement to the two books. Table of Contents: Preface / Acknowledgments / List of Symbols / List of Abbreviations / Introduction to Speckle Noise in Ultrasound Imaging and Video / Basics of Evaluation Methodology / Linear Despeckle Filtering / Nonlinear Despeckle Filtering / Diffusion Despeckle Filtering / Wavelet Despeckle Filtering / Evaluation of Despeckle Filtering / Summary and Future Directions / References / Authors' Biographies
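
    As a taste of the linear despeckle family, here is a small Python version of the classic Lee filter, a local linear MMSE estimator for multiplicative speckle. The book's own code is MATLAB, and the window size and noise-variance value below are illustrative:

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def lee_filter(img, win=7, noise_var=0.05):
        """Classic Lee despeckle filter: blend the local mean with the
        observation, weighting toward the observation in detailed regions."""
        mean = uniform_filter(img, win)
        var = uniform_filter(img * img, win) - mean ** 2
        weight = var / (var + noise_var + 1e-12)   # ~1 on edges, ~0 on flat areas
        return mean + weight * (img - mean)

    rng = np.random.default_rng(1)
    clean = np.tile(np.linspace(0.2, 1.0, 128), (128, 1))
    speckled = clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape)  # multiplicative noise
    print(np.std(speckled - clean), np.std(lee_filter(speckled) - clean))  # error shrinks
    ```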

  • by Henry Himberg
    560,-

    Augmented reality (AR) systems are often used to superimpose virtual objects or information on a scene to improve situational awareness. Delays in the display system or inaccurate registration of objects destroy the sense of immersion a user experiences when using AR systems. AC electromagnetic trackers are ideal for these applications when combined with head orientation prediction to compensate for display system delays. Unfortunately, these trackers do not perform well in environments that contain conductive or ferrous materials, which distort the magnetic field, unless expensive calibration techniques are used. In our work we focus on both the prediction and distortion compensation aspects of this application, developing a "small footprint" predictive filter for display lag compensation and a simplified calibration system for AC magnetic trackers. In the first phase of our study we presented a novel method of tracking angular head velocity from quaternion orientation using an Extended Kalman Filter in both single model (DQEKF) and multiple model (MMDQ) implementations. In the second phase of our work we have developed a new method of mapping the magnetic field generated by the tracker without high precision measurement equipment. This method uses simple fixtures with multiple sensors in a rigid geometry to collect magnetic field data in the tracking volume. We have developed a new algorithm to process the collected data and generate a map of the magnetic field distortion that can be used to compensate distorted measurement data. Table of Contents: List of Tables / Preface / Acknowledgments / Delta Quaternion Extended Kalman Filter / Multiple Model Delta Quaternion Filter / Interpolation Volume Calibration / Conclusion / References / Authors' Biographies
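
    The core of the delta quaternion idea, recovering angular velocity from successive orientation quaternions, can be sketched in a few lines. This is a plain finite-difference illustration, not the authors' EKF:

    ```python
    import numpy as np

    def quat_mul(a, b):
        """Hamilton product of quaternions [w, x, y, z]."""
        w1, x1, y1, z1 = a
        w2, x2, y2, z2 = b
        return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                         w1*x2 + x1*w2 + y1*z2 - z1*y2,
                         w1*y2 - x1*z2 + y1*w2 + z1*x2,
                         w1*z2 + x1*y2 - y1*x2 + z1*w2])

    def angular_velocity(q_prev, q_curr, dt):
        """The delta quaternion dq = q_prev^-1 * q_curr encodes a rotation of
        angle 2*atan2(|v|, w) about axis v/|v|, traversed in dt seconds."""
        dq = quat_mul(q_prev * np.array([1, -1, -1, -1]), q_curr)
        v = dq[1:]
        n = np.linalg.norm(v)
        angle = 2 * np.arctan2(n, dq[0])
        return (angle / dt) * (v / n if n > 1e-12 else np.zeros(3))

    # Head turning about z at 1 rad/s, sampled 10 ms apart.
    dt, w = 0.01, 1.0
    q0 = np.array([1.0, 0.0, 0.0, 0.0])
    q1 = np.array([np.cos(w * dt / 2), 0.0, 0.0, np.sin(w * dt / 2)])
    print(angular_velocity(q0, q1, dt))   # ~[0, 0, 1.0]
    ```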

  • by Visar Berisha
    480,-

    Bandwidth extension of speech is used in the International Telecommunication Union G.729.1 standard in which the narrowband bitstream is combined with quantized high-band parameters. Although this system produces high-quality wideband speech, the additional bits used to represent the high band can be further reduced. In addition to the algorithm used in the G.729.1 standard, bandwidth extension methods based on spectrum prediction have also been proposed. Although these algorithms do not require additional bits, they perform poorly when the correlation between the low and the high band is weak. In this book, two wideband speech coding algorithms that rely on bandwidth extension are developed. The algorithms operate as wrappers around existing narrowband compression schemes. More specifically, in these algorithms, the low band is encoded using an existing toll-quality narrowband system, whereas the high band is generated using the proposed extension techniques. The first method relies only on transmitted high-band information to generate the wideband speech. The second algorithm uses a constrained minimum mean square error estimator that combines transmitted high-band envelope information with a predictive scheme driven by narrowband features. Both algorithms make use of novel perceptual models based on loudness that determine optimum quantization strategies for wideband recovery and synthesis. Objective and subjective evaluations reveal that the proposed system performs at a lower average bit rate while improving speech quality when compared to other similar algorithms.
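
    A crude baseline behind many bandwidth extension schemes is spectral folding, where zero-insertion upsampling mirrors the narrowband spectrum into the high band before envelope shaping. Here is a toy sketch of that baseline only, not the book's constrained MMSE estimator:

    ```python
    import numpy as np

    def spectral_folding_bwe(nb):
        """Zero-insertion 2x upsampling: the 0-4 kHz band is mirrored into
        4-8 kHz. Real coders then shape the folded band with a transmitted
        or predicted high-band envelope."""
        wb = np.zeros(2 * len(nb))
        wb[::2] = nb
        return wb

    fs = 8000
    t = np.arange(800) / fs
    nb = np.sin(2 * np.pi * 1000 * t)          # 1 kHz narrowband tone
    wb = spectral_folding_bwe(nb)              # now at 16 kHz sampling
    spec = np.abs(np.fft.rfft(wb))
    peaks = np.argsort(spec)[-2:] * 16000 / len(wb)
    print(sorted(peaks))                       # [1000.0, 7000.0]: tone plus its mirror
    ```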

  • by Christine M. Zwart
    480,-

    Motion estimation is a long-standing cornerstone of image and video processing. Most notably, motion estimation serves as the foundation for many of today's ubiquitous video coding standards including H.264. Motion estimators also play key roles in countless other applications that serve the consumer, industrial, biomedical, and military sectors. Of the many available motion estimation techniques, optical flow is widely regarded as most flexible. The flexibility offered by optical flow is particularly useful for complex registration and interpolation problems, but comes at a considerable computational expense. As the volume and dimensionality of data that motion estimators are applied to continue to grow, that expense becomes more and more costly. Control grid motion estimators based on optical flow can accomplish motion estimation with flexibility similar to pure optical flow, but at a fraction of the computational expense. Control grid methods also offer the added benefit of representing motion far more compactly than pure optical flow. This booklet explores control grid motion estimation and provides implementations of the approach that apply to data of multiple dimensionalities. Important current applications of control grid methods including registration and interpolation are also developed. Table of Contents: Introduction / Control Grid Interpolation (CGI) / Application of CGI to Registration Problems / Application of CGI to Interpolation Problems / Discussion and Conclusions
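
    The compactness argument is easy to see in code: motion stored only at control-grid nodes expands to a dense field by bilinear interpolation inside each cell. A minimal sketch under that view of control grid methods (grid size and cell spacing are arbitrary):

    ```python
    import numpy as np

    def dense_motion_from_grid(node_u, node_v, cell=16):
        """Expand motion vectors at control-grid nodes into a dense field by
        bilinear interpolation within each cell."""
        H, W = node_u.shape
        y = np.linspace(0, H - 1, (H - 1) * cell + 1)
        x = np.linspace(0, W - 1, (W - 1) * cell + 1)
        yy, xx = np.meshgrid(y, x, indexing="ij")
        y0 = np.clip(yy.astype(int), 0, H - 2)
        x0 = np.clip(xx.astype(int), 0, W - 2)
        fy, fx = yy - y0, xx - x0

        def bilerp(f):
            return ((1 - fy) * (1 - fx) * f[y0, x0] + (1 - fy) * fx * f[y0, x0 + 1]
                    + fy * (1 - fx) * f[y0 + 1, x0] + fy * fx * f[y0 + 1, x0 + 1])

        return bilerp(node_u), bilerp(node_v)

    node_u = np.zeros((4, 4))
    node_u[1:3, 1:3] = 2.0                       # horizontal motion only in the middle
    u, v = dense_motion_from_grid(node_u, np.zeros((4, 4)))
    print(u.shape, u.max(), u[0, 0])             # (49, 49) 2.0 0.0
    ```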

  • by Juan Andrade
    666,-

    Blurring is an almost omnipresent effect in natural images. The main causes of blurring in images include: (a) the existence of objects at different depths within the scene, which is known as defocus blur; (b) blurring due to motion, either of objects in the scene or of the imaging device; and (c) blurring due to atmospheric turbulence. Automatic estimation of spatially varying sharpness/blurriness has several applications, including depth estimation, image quality assessment, information retrieval, and image restoration, among others. There are some cases in which blur is intentionally introduced or enhanced; for example, in artistic photography and cinematography, blur is intentionally introduced to emphasize a certain image region. Bokeh is a technique that introduces defocus blur for aesthetic purposes. Additionally, in trending applications like augmented and virtual reality, blur is usually introduced in order to provide or enhance depth perception. Digital images and videos are produced every day in astonishing amounts, and the demand for higher quality is constantly rising, which creates a need for advanced image quality assessment. Additionally, image quality assessment is important for the performance of image processing algorithms. It has been determined that image noise and artifacts can affect the performance of algorithms such as face detection and recognition, image saliency detection, and video target tracking. Therefore, image quality assessment (IQA) has been a topic of intense research in the fields of image processing and computer vision. Since humans are the end consumers of multimedia signals, subjective quality metrics provide the most reliable results; however, their cost in addition to time requirements makes them unfeasible for practical applications. Thus, objective quality metrics are usually preferred.
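
    One simple baseline among spatially varying sharpness estimators is the local variance of the Laplacian; a Python sketch under that assumption (the window size is arbitrary, and the book surveys far more robust measures):

    ```python
    import numpy as np
    from scipy.ndimage import laplace, uniform_filter, gaussian_filter

    def sharpness_map(img, win=9):
        """Spatially varying sharpness as the local variance of the Laplacian:
        high in focused, textured regions and low in blurred ones."""
        lap = laplace(np.asarray(img, float))
        return uniform_filter(lap ** 2, win) - uniform_filter(lap, win) ** 2

    rng = np.random.default_rng(6)
    img = rng.standard_normal((64, 64))
    img[:, 32:] = gaussian_filter(img[:, 32:], 2.0)   # blur the right half
    s = sharpness_map(img)
    print(s[:, :32].mean() > s[:, 32:].mean())        # True: left half is sharper
    ```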

  • by Andreas Spanias
    526,-

    The MPEG-1 Layer III (MP3) algorithm is one of the most successful audio formats for consumer audio storage and for transfer and playback of music on digital audio players. The MP3 compression standard along with the AAC (Advanced Audio Coding) algorithm are associated with the most successful music players of the last decade. This book describes the fundamentals and the MATLAB implementation details of the MP3 algorithm. Several of the tedious processes in MP3 are supported by demonstrations using MATLAB software. The book presents the theoretical concepts and algorithms used in the MP3 standard. The implementation details and simulations with MATLAB complement the theoretical principles. The extensive list of references enables the reader to perform a more detailed study on specific aspects of the algorithm and gain exposure to advancements in perceptual coding. Table of Contents: Introduction / Analysis Subband Filter Bank / Psychoacoustic Model II / MDCT / Bit Allocation, Quantization and Coding / Decoder
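
    For instance, the MDCT stage maps each windowed 2N-sample block to N coefficients. Below is a direct (non-optimized) NumPy version of the transform, offered as a sketch rather than the book's MATLAB implementation:

    ```python
    import numpy as np

    def mdct(frame):
        """Direct MDCT of a 2N-sample windowed frame into N coefficients,
        the lapped transform in MP3's hybrid filter bank."""
        N = len(frame) // 2
        n = np.arange(2 * N)
        k = np.arange(N)
        basis = np.cos(np.pi / N * (n[None, :] + 0.5 + N / 2) * (k[:, None] + 0.5))
        return basis @ frame

    frame = np.sin(2 * np.pi * 5 * np.arange(36) / 36)   # 36-sample long block, as in Layer III
    print(mdct(frame).shape)                             # (18,) coefficients
    ```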

  • by Narayan Kovvali
    480,-

    Gaussian quadrature is a powerful technique for numerical integration that falls under the broad category of spectral methods. The purpose of this work is to provide an introduction to the theory and practice of Gaussian quadrature. We study the approximation theory of trigonometric and orthogonal polynomials and related functions and examine the analytical framework of Gaussian quadrature. We discuss Gaussian quadrature for bandlimited functions, a topic inspired by some recent developments in the analysis of prolate spheroidal wave functions. Algorithms for the computation of the quadrature nodes and weights are described. Several applications of Gaussian quadrature are given, ranging from the evaluation of special functions to pseudospectral methods for solving differential equations. Software realization of select algorithms is provided. Table of Contents: Introduction / Approximating with Polynomials and Related Functions / Gaussian Quadrature / Applications / Links to Mathematical Software
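
    As a quick illustration of the practice side, NumPy exposes Gauss-Legendre nodes and weights directly; an n-point rule integrates polynomials up to degree 2n-1 exactly, and converges spectrally for smooth integrands:

    ```python
    import numpy as np

    # Gauss-Legendre nodes and weights for a 5-point rule on [-1, 1].
    nodes, weights = np.polynomial.legendre.leggauss(5)

    # Integrate exp(x) over [-1, 1]; the exact value is e - 1/e.
    approx = np.sum(weights * np.exp(nodes))
    print(approx, np.exp(1) - np.exp(-1))   # agree to ~1e-12 with only 5 nodes
    ```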

  • by Venkatraman Atti
    526,-

    From the early pulse code modulation-based coders to some of the recent multi-rate wideband speech coding standards, the area of speech coding has made several significant strides with an objective to attain high quality of speech at the lowest possible bit rate. This book presents some of the recent advances in linear prediction (LP)-based speech analysis that employ perceptual models for narrow- and wide-band speech coding. The LP analysis-synthesis framework has been successful for speech coding because it fits well the source-system paradigm for speech synthesis. Limitations associated with the conventional LP have been studied extensively, and several extensions to LP-based analysis-synthesis have been proposed, e.g., the discrete all-pole modeling, the perceptual LP, the warped LP, the LP with modified filter structures, the IIR-based pure LP, all-pole modeling using the weighted-sum of LSP polynomials, the LP for low frequency emphasis, and the cascade-form LP. These extensions can be classified as algorithms that either attempt to improve the LP spectral envelope fitting performance or embed perceptual models in the LP. The first half of the book reviews some of the recent developments in predictive modeling of speech with the help of MATLAB™ simulation examples. Advantages of integrating perceptual models in low bit rate speech coding depend on the accuracy of these models to mimic the human performance and, more importantly, on the achievable "coding gains" and "computational overhead" associated with these physiological models. Methods that exploit the masking properties of the human ear in speech coding standards, even today, are largely based on concepts introduced by Schroeder and Atal in 1979. For example, a simple approach employed in speech coding standards is to use a perceptual weighting filter to shape the quantization noise according to the masking properties of the human ear. The second half of the book reviews some of the recent developments in perceptual modeling of speech (e.g., masking threshold, psychoacoustic models, auditory excitation pattern, and loudness) with the help of MATLAB simulations. Supplementary material including MATLAB programs and simulation examples presented in this book can also be accessed here. Table of Contents: Introduction / Predictive Modeling of Speech / Perceptual Modeling of Speech
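
    The autocorrelation method at the heart of conventional LP fits in a few lines: solve the Yule-Walker normal equations for the predictor coefficients. A minimal Python sketch (the book's simulations are in MATLAB, and the frame length and order below are just typical narrowband choices):

    ```python
    import numpy as np
    from scipy.linalg import solve_toeplitz

    def lpc(x, order):
        """Autocorrelation-method linear prediction: solve R a = r so that
        x[n] is approximated by sum_k a[k] * x[n-1-k]."""
        r = np.correlate(x, x, mode="full")[len(x) - 1:]
        return solve_toeplitz(r[:order], r[1:order + 1])

    fs = 8000
    t = np.arange(240) / fs                        # one 30 ms narrowband frame
    x = np.sin(2 * np.pi * 500 * t) + 0.5 * np.sin(2 * np.pi * 1500 * t)
    a = lpc(x * np.hamming(len(x)), order=10)      # 10th-order all-pole model
    print(a[:3])
    ```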

  • by Ioannis Kyriakides
    480,-

    Recent innovations in modern radar for designing transmitted waveforms, coupled with new algorithms for adaptively selecting the waveform parameters at each time step, have resulted in improvements in tracking performance. Of particular interest are waveforms that can be mathematically designed to have reduced ambiguity function sidelobes, as their use can lead to an increase in the target state estimation accuracy. Moreover, adaptively positioning the sidelobes can reveal weak target returns by reducing interference from stronger targets. The manuscript provides an overview of recent advances in the design of multicarrier phase-coded waveforms based on Björck constant-amplitude zero-autocorrelation (CAZAC) sequences for use in an adaptive waveform selection scheme for multiple target tracking. The adaptive waveform design is formulated using sequential Monte Carlo techniques that need to be matched to the high resolution measurements. The work will be of interest to both practitioners and researchers in radar as well as to researchers in other applications where high resolution measurements can have significant benefits. Table of Contents: Introduction / Radar Waveform Design / Target Tracking with a Particle Filter / Single Target Tracking with LFM and CAZAC Sequences / Multiple Target Tracking / Conclusions
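
    The CAZAC properties are easy to verify numerically. The sketch below uses a Zadoff-Chu sequence, a well-known CAZAC family related to (but not the same as) the Björck sequences the book employs:

    ```python
    import numpy as np

    def zadoff_chu(N, u=1):
        """Zadoff-Chu sequence (N odd, gcd(u, N) = 1): unit modulus and zero
        periodic autocorrelation at all nonzero lags."""
        n = np.arange(N)
        return np.exp(-1j * np.pi * u * n * (n + 1) / N)

    s = zadoff_chu(63)
    print(np.allclose(np.abs(s), 1.0))                 # constant amplitude
    # Circular autocorrelation via the FFT: a delta at lag zero.
    acf = np.fft.ifft(np.abs(np.fft.fft(s)) ** 2)
    print(np.max(np.abs(acf[1:])) < 1e-9)              # zero autocorrelation
    ```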

  • by Adarsh Narasimhamurthy
    446,-

    Orthogonal Frequency Division Multiplexing (OFDM) systems are widely used in the standards for digital audio/video broadcasting, WiFi, and WiMAX. Being a frequency-domain approach to communications, OFDM has important advantages in dealing with the frequency-selective nature of high data rate wireless communication channels. As the needs for operating with higher data rates become more pressing, OFDM systems have emerged as an effective physical-layer solution. This short monograph is intended as a tutorial which highlights the deleterious aspects of the wireless channel and presents why OFDM is a good choice as a modulation that can transmit at high data rates. The system-level approach we shall pursue will also point out the disadvantages of OFDM systems, especially in the context of peak-to-average power ratio and carrier frequency synchronization. Finally, simulation of OFDM systems will be given due prominence. Simple MATLAB programs are provided for bit error rate simulation using a discrete-time OFDM representation. Software is also provided to simulate the effects of inter-block interference, inter-carrier interference, and signal clipping on the error rate performance. Different components of the OFDM system are described, and detailed implementation notes are provided for the programs. The program can be downloaded here. Table of Contents: Introduction / Modeling Wireless Channels / Baseband OFDM System / Carrier Frequency Offset / Peak to Average Power Ratio / Simulation of the Performance of OFDM Systems / Conclusions
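
    The discrete-time view those simulations take is compact enough to sketch end to end. Here is a noise-free BPSK example in Python (the book's code is MATLAB) showing why the cyclic prefix reduces a frequency-selective channel to one-tap equalization per subcarrier:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    N, cp = 64, 16                                   # subcarriers and cyclic prefix length
    bits = rng.integers(0, 2, N)
    symbols = 2.0 * bits - 1.0                       # BPSK on each subcarrier

    # Transmitter: IFFT across subcarriers, then prepend the cyclic prefix.
    tx = np.fft.ifft(symbols)
    tx = np.concatenate([tx[-cp:], tx])

    # Three-tap frequency-selective channel; the CP makes the linear
    # convolution act circularly on the payload.
    h = np.array([1.0, 0.4, 0.2])
    rx = np.convolve(tx, h)[:len(tx)]

    # Receiver: drop the CP, FFT, then a one-tap equalizer per subcarrier.
    Y = np.fft.fft(rx[cp:cp + N])
    H = np.fft.fft(h, N)
    bits_hat = (np.real(Y / H) > 0).astype(int)
    print(np.array_equal(bits, bits_hat))            # True: noise-free recovery
    ```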

  • by Kostas Kokkinakis
    526,-

    With human-computer interactions and hands-free communications becoming overwhelmingly important in the new millennium, recent research efforts have been increasingly focusing on state-of-the-art multi-microphone signal processing solutions to improve speech intelligibility in adverse environments. One such prominent statistical signal processing technique is blind signal separation (BSS). BSS was first introduced in the early 1990s and quickly emerged as an area of intense research activity showing huge potential in numerous applications. BSS comprises the task of 'blindly' recovering a set of unknown signals, the so-called sources, from their observed mixtures, based on very little to almost no prior knowledge about the source characteristics or the mixing structure. The goal of BSS is to process multi-sensory observations of an inaccessible set of signals in a manner that reveals their individual (and original) form, by exploiting the spatial and temporal diversity, readily accessible through a multi-microphone configuration. Proceeding blindly exhibits a number of advantages, since assumptions about the room configuration and the source-to-sensor geometry can be relaxed without affecting overall efficiency. This booklet investigates one of the most commercially attractive applications of BSS, which is the simultaneous recovery of signals inside a reverberant (naturally echoing) environment, using two (or more) microphones. In this paradigm, each microphone captures not only the direct contributions from each source, but also several reflected copies of the original signals at different propagation delays. These recordings are referred to as the convolutive mixtures of the original sources. The goal of this booklet in the lecture series is to provide insight on recent advances in algorithms, which are ideally suited for blind signal separation of convolutive speech mixtures. More importantly, specific emphasis is given in practical applications of the developed BSS algorithms associated with real-life scenarios. The developed algorithms are put in the context of modern DSP devices, such as hearing aids and cochlear implants, where design requirements dictate low power consumption and call for portability and compact size. Along these lines, this booklet focuses on modern BSS algorithms which address (1) the limited amount of processing power and (2) the small number of microphones available to the end-user. Table of Contents: Fundamentals of blind signal separation / Modern blind signal separation algorithms / Application of blind signal processing strategies to noise reduction for the hearing-impaired / Conclusions and future challenges / Bibliography
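
    For intuition, the instantaneous (non-reverberant) version of the problem can be solved in a few lines with FastICA. Real convolutive speech mixtures, the book's focus, need frequency-domain extensions of such algorithms; this toy is only the simplest case:

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    # Two toy sources mixed through an instantaneous 2x2 matrix.
    t = np.linspace(0, 8, 4000)
    S = np.c_[np.sin(3 * t), np.sign(np.sin(7 * t))]   # sources as columns
    X = S @ np.array([[1.0, 0.4], [0.6, 1.0]])         # observed microphone mixtures

    S_hat = FastICA(n_components=2, random_state=0).fit_transform(X)
    for i in range(2):                                 # match up to order and scale
        c = max(abs(np.corrcoef(S_hat[:, i], S[:, j])[0, 1]) for j in range(2))
        print(f"estimated source {i}: best correlation {c:.3f}")   # ~1.000
    ```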

  • by Sandeep Prasad Sira
    446,-

    Recent advances in sensor technology and information processing afford a new flexibility in the design of waveforms for agile sensing. Sensors are now developed with the ability to dynamically choose their transmit or receive waveforms in order to optimize an objective cost function. This has exposed a new paradigm of significant performance improvements in active sensing: dynamic waveform adaptation to environment conditions, target structures, or information features. The manuscript provides a review of recent advances in waveform-agile sensing for target tracking applications. A dynamic waveform selection and configuration scheme is developed for two active sensors that track one or multiple mobile targets. A detailed description of two sequential Monte Carlo algorithms for agile tracking are presented, together with relevant Matlab code and simulation studies, to demonstrate the benefits of dynamic waveform adaptation. The work will be of interest not only to practitioners of radar and sonar, but also other applications where waveforms can be dynamically designed, such as communications and biosensing. Table of Contents: Waveform-Agile Target Tracking Application Formulation / Dynamic Waveform Selection with Application to Narrowband and Wideband Environments / Dynamic Waveform Selection for Tracking in Clutter / Conclusions / CRLB Evaluation for Gaussian Envelope GFM Chirp from the Ambiguity Function / CRLB Evaluation from the Complex Envelope
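
    The sequential Monte Carlo machinery underneath can be illustrated with a bootstrap particle filter on a scalar toy target; the waveform-selection layer the book adds on top chooses measurement parameters between these predict/update steps. All values below are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    T, n_particles = 50, 2000
    q, r = 0.1, 0.5                                  # process and measurement noise std

    # Simulate a scalar random-walk target and noisy measurements of it.
    x_true = np.cumsum(q * rng.standard_normal(T))
    z = x_true + r * rng.standard_normal(T)

    particles = rng.standard_normal(n_particles)
    for t in range(T):
        particles += q * rng.standard_normal(n_particles)        # predict
        w = np.exp(-0.5 * ((z[t] - particles) / r) ** 2)         # weight by likelihood
        w /= w.sum()
        particles = particles[rng.choice(n_particles, n_particles, p=w)]  # resample
    print(abs(particles.mean() - x_true[-1]))        # tracking error, typically small
    ```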

  • by Andreas Spanias & Karthikeyan Ramamurthy
    526,-

    This book describes several modules of the Code Excited Linear Prediction (CELP) algorithm. The authors use the Federal Standard-1016 CELP MATLAB® software to describe in detail several functions and parameter computations associated with analysis-by-synthesis linear prediction. The book begins with a description of the basics of linear prediction, followed by an overview of the FS-1016 CELP algorithm. Subsequent chapters describe the various modules of the CELP algorithm in detail. In each chapter, an overall functional description of CELP modules is provided along with detailed illustrations of their MATLAB® implementation. Several code examples and plots are provided to highlight some of the key CELP concepts. A link to the MATLAB® code is found within the book. Table of Contents: Introduction to Linear Predictive Coding / Autocorrelation Analysis and Linear Prediction / Line Spectral Frequency Computation / Spectral Distortion / The Codebook Search / The FS-1016 Decoder
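
    The analysis-by-synthesis core reduces to searching a codebook for the gain/vector pair that best matches a target. Here is a stripped-down Python sketch that omits the synthesis-filter weighting a real FS-1016 search applies to each codevector:

    ```python
    import numpy as np

    # Toy exhaustive codebook search: pick the codevector and gain that
    # minimize the squared error against the (perceptually weighted) target.
    rng = np.random.default_rng(5)
    codebook = rng.standard_normal((128, 40))   # 128 candidate excitation vectors
    target = rng.standard_normal(40)            # weighted target for this subframe

    gains = codebook @ target / np.sum(codebook ** 2, axis=1)   # optimal gain per entry
    errors = np.sum((target - gains[:, None] * codebook) ** 2, axis=1)
    best = int(np.argmin(errors))
    print(best, gains[best])                    # chosen index and its gain
    ```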
