'Vital reading. This is the book on artificial intelligence we need right now.' Mike Krieger, cofounder of Instagram Artificial intelligence is rapidly dominating every aspect of our modern lives, influencing the news we consume, whether we get a mortgage, and even which friends wish us happy birthday. But as algorithms make ever more decisions on our behalf, how do we ensure they do what we want? And fairly? This conundrum - dubbed 'The Alignment Problem' by experts - is the subject of this timely and important book. From the AI program which cheats at computer games to the sexist algorithm behind Google Translate, bestselling author Brian Christian explains how, as AI develops, we rapidly approach a collision between artificial intelligence and ethics. If we stand by, we face a future with unregulated algorithms that propagate our biases - and worse - violate our most sacred values. Urgent and fascinating, this is an accessible primer to the most important issue facing AI researchers today.
This thesis provides a novel conceptual contribution to artificial intelligence (AI) safety by finding a tractable method for solving the AI value alignment problem: the creation of more complete audience models using narrative information extraction techniques from the field of computational narratology. With a thorough analysis of results from the field of computational narratology, I show that research into narrative for autonomous agents can contribute to solving the AI value alignment problem. In short, we can create artificial intelligence systems that automatically act in the best interest of humanity by teaching them to read and understand stories. The novelty of this thesis lies in the combination of two disparate academic fields: AI safety and computational narratology. Reviewing the current work and ongoing issues in both fields, I show that methods used in computational narratology to model stories can be used to solve the value alignment problem from the field of AI safety. In Chapter 1, I show why value alignment is the best solution to the problem of controlling intelligent agents. In Chapter 2, I discuss how stories encode tacit human values, and how the creation of a better audience model will contribute to solving the value alignment problem. In Chapter 3, I present two case studies providing evidence that value alignment from narrative information extraction is not only viable, but effective. Finally, I conclude by acknowledging the shortcomings of the field and pressing areas of future work.
This dissertation addresses the problem of searching for a target within a region by sequential queries with noisy responses. A Bayesian decision maker is responsible for collecting observation samples so as to enhance its knowledge of the true location in a speedy manner. When the responses are noiseless, classical binary search solves such a problem optimally. Noisy binary search, on the other hand, has also been formulated and studied extensively in theory over the past 60 years, since Horstein (1963). However, the algorithms developed for the noisy binary search problem have found limited use in real-world engineering problems. Motivated by bridging theory and practice, we formulate the noisy binary search problem by identifying practical scenarios and constraints that arise naturally in applications such as spectrum sensing in cognitive communication, AoA estimation by adaptive beamforming in large antenna array systems, visual image inspection, bit-wise data transmission, and heavy hitter detection in network systems. The first part of the dissertation (Chapter 2) focuses on the theoretical understanding and development of noisy binary search algorithms under these practical constraints. Three algorithms, sortPM, dyaPM, and hiePM, are proposed. Using the Extrinsic Jensen-Shannon divergence from information theory, we provide upper bounds on the expected search time of each of the algorithms. By comparing with an information-theoretic lower bound, we demonstrate the asymptotic optimality or suboptimality of the proposed algorithms (asymptotic in the resolution of the target location). The second part of the dissertation applies the proposed hiePM to practical problems. In particular, Chapter 3 demonstrates the application of hiePM to the data transmission problem with noiseless feedback. The dyadic hierarchical query area of hiePM relates directly to the bit representation of the data stream.
This significantly simplifies the corresponding adaptive encoding scheme and allows bit-wise encoding. Chapter 4 considers the initial beam alignment problem in 5G mmWave communication using beamforming. With a single-path channel model, the problem reduces to actively searching for the Angle-of-Arrival (AoA) of the signal sent from the user to the Base Station (BS). hiePM is applied to adaptively and sequentially choose the beamforming vector from a hierarchical beamforming codebook. The proposed algorithm is compared to prior initial beam alignment schemes that employ linear beam search, repeated binary search, or random beam search, and achieves state-of-the-art performance in terms of both the AoA estimation error at the end of the initial alignment and the spectral efficiency during the communication phase.
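The posterior-updating search that these algorithms build on can be illustrated with the classical probabilistic bisection idea dating back to Horstein (1963): maintain a discrete posterior over candidate locations, query the posterior median, and reweight by Bayes' rule according to the noisy binary response. The sketch below is a generic illustration with made-up names and parameters, not the dissertation's sortPM, dyaPM, or hiePM:

```python
import random

def probabilistic_bisection(respond, n=1024, p=0.7, steps=60):
    """Search for a target index in [0, n) using noisy binary responses.

    respond(q) answers the question "is the target >= q?" and is assumed
    correct with probability p > 0.5. A discrete posterior over locations
    is maintained; each step queries the posterior median and reweights.
    """
    post = [1.0 / n] * n
    for _ in range(steps):
        # Query point: the posterior median.
        acc, q = 0.0, 0
        for i, w in enumerate(post):
            acc += w
            if acc >= 0.5:
                q = i
                break
        ans = respond(q)  # noisy answer to "target >= q?"
        # Bayes update: boost locations consistent with the answer.
        for i in range(n):
            agrees = (i >= q) == ans
            post[i] *= p if agrees else (1.0 - p)
        total = sum(post)
        post = [w / total for w in post]
    return max(range(n), key=lambda i: post[i])  # MAP estimate

# Usage: a noisy oracle that lies with probability 0.3.
target = 700
rng = random.Random(0)
oracle = lambda q: (target >= q) if rng.random() < 0.7 else not (target >= q)
est = probabilistic_bisection(oracle, n=1024, p=0.7, steps=60)
```

With enough queries the posterior concentrates and the MAP estimate tracks the target; hiePM differs in constraining queries to a dyadic hierarchy of regions, which this sketch does not model.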
Interference Alignment: A New Look at Signal Dimensions in a Communication Network provides both a tutorial and a survey of the state of the art on the topic.
Covers the fundamentals and techniques of multiple biological sequence alignment and analysis, and shows readers how to choose the appropriate sequence analysis tools for their tasks. This book describes the traditional and modern approaches to biological sequence alignment and homology search. It contains 11 chapters, with Chapter 1 providing basic information on biological sequences. Next, Chapter 2 covers the fundamentals of pairwise sequence alignment, while Chapters 3 and 4 examine popular existing quantitative models and practical clustering techniques that have been used in multiple sequence alignment. Chapter 5 describes, characterizes, and relates many multiple sequence alignment models. Chapter 6 describes how phylogenetic trees have traditionally been constructed, and how available sequence knowledge bases can be used to improve the accuracy of phylogeny reconstruction. Chapter 7 covers the latest methods developed to improve the run-time efficiency of multiple sequence alignment. Next, Chapter 8 covers several popular existing multiple sequence alignment servers and services, and Chapter 9 examines several multiple sequence alignment techniques developed to handle the short sequences (reads) produced by Next Generation Sequencing (NGS). Chapter 10 describes a bioinformatics application using multiple sequence alignment of short reads or whole genomes as input. Lastly, Chapter 11 reviews RNA and protein secondary structure prediction using the evolutionary information inferred from multiple sequence alignments.
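The pairwise dynamic programming that underlies Chapter 2's fundamentals can be sketched in a few lines of Needleman-Wunsch-style code; the scoring scheme below (match +1, mismatch -1, gap -1) is a common textbook choice, not a parameterization taken from the book:

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Global pairwise alignment by dynamic programming (Needleman-Wunsch)."""
    n, m = len(a), len(b)
    # score[i][j] = best score aligning a[:i] with b[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            score[i][j] = max(score[i - 1][j - 1] + sub,
                              score[i - 1][j] + gap,
                              score[i][j - 1] + gap)
    # Traceback to recover one optimal alignment.
    out_a, out_b, i, j = [], [], n, m
    while i > 0 or j > 0:
        sub = match if i > 0 and j > 0 and a[i - 1] == b[j - 1] else mismatch
        if i > 0 and j > 0 and score[i][j] == score[i - 1][j - 1] + sub:
            out_a.append(a[i - 1]); out_b.append(b[j - 1]); i -= 1; j -= 1
        elif i > 0 and score[i][j] == score[i - 1][j] + gap:
            out_a.append(a[i - 1]); out_b.append('-'); i -= 1
        else:
            out_a.append('-'); out_b.append(b[j - 1]); j -= 1
    return score[n][m], ''.join(reversed(out_a)), ''.join(reversed(out_b))

s, aln_a, aln_b = needleman_wunsch("GATTACA", "GCATGCU")
```

Multiple sequence alignment generalizes this two-sequence table to many sequences, which is where the quantitative models and clustering heuristics of Chapters 3-5 come in.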
• Covers the full spectrum of the field, from alignment algorithms to scoring methods, practical techniques, and alignment tools and their evaluations • Describes theories and developments of scoring functions and scoring matrices • Examines phylogeny estimation and large-scale homology search Multiple Biological Sequence Alignment: Scoring Functions, Algorithms and Applications is a reference for researchers, engineers, graduate and post-graduate students in bioinformatics and systems biology, and for molecular biologists. Ken Nguyen, PhD, is an associate professor at Clayton State University, GA, USA. He received his PhD, MSc, and BSc degrees in computer science, all from Georgia State University. His research interests are in databases, parallel and distributed computing, and bioinformatics. He was a Molecular Basis of Disease fellow at Georgia State and is the recipient of the highest graduate honor at Georgia State, the William M. Suttles Graduate Fellowship. Xuan Guo, PhD, is a postdoctoral associate at Oak Ridge National Lab, USA. He received his PhD degree in computer science from Georgia State University in 2015. His research interests are in bioinformatics, machine learning, and cloud computing. He is an editorial assistant of the International Journal of Bioinformatics Research and Applications. Yi Pan, PhD, is a Regents' Professor of Computer Science and an Interim Associate Dean and Chair of Biology at Georgia State University. He received his BE and ME in computer engineering from Tsinghua University in China and his PhD in computer science from the University of Pittsburgh. Dr. Pan's research interests include parallel and distributed computing, optical networks, wireless networks, and bioinformatics. He has published more than 180 journal papers, with about 60 papers in various IEEE/ACM journals. He is co-editor, along with Albert Y. Zomaya, of the Wiley Series in Bioinformatics.
A leading artificial intelligence researcher lays out a new approach to AI that will enable people to coexist successfully with increasingly intelligent machines.
This two-volume set (CCIS 134 and CCIS 135) constitutes the refereed proceedings of the International Conference on Intelligent Computing and Information Science, ICICIS 2011, held in Chongqing, China, in January 2011. The 226 revised full papers presented in both volumes, CCIS 134 and CCIS 135, were carefully reviewed and selected from over 600 initial submissions. The papers provide the reader with a broad overview of the latest advances in the field of intelligent computing and information science.
Medical imaging is the cornerstone for diagnosis, treatment planning, and follow-up for many medical conditions, and this reference on dental and facial procedures describes a clever digital combination of two successive images that may provide much more information than the naked eye can obtain from both images separately. Using mutual information (MI) alignment/registration criteria, the discussion covers a wide variety of clinical applications for which this technology is helpful.
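The MI criterion scores how well two images are registered: the intensities of the overlaid images are treated as paired samples, and the mutual information of their joint histogram peaks when the images line up. A minimal sketch on toy 1-D intensity arrays, purely illustrative and not a clinical registration pipeline:

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """MI of paired intensity samples: sum p(x,y) * log2 p(x,y)/(p(x)p(y))."""
    assert len(xs) == len(ys)
    n = len(xs)
    pxy = Counter(zip(xs, ys))      # joint intensity histogram
    px, py = Counter(xs), Counter(ys)  # marginal histograms
    mi = 0.0
    for (x, y), c in pxy.items():
        # p(x,y) = c/n, p(x) = px[x]/n, p(y) = py[y]/n
        mi += (c / n) * log2(c * n / (px[x] * py[y]))
    return mi

# Aligned copies share all their information; shifting one reduces MI,
# which is what a registration search exploits.
img = [0, 0, 1, 1, 2, 2, 3, 3]
mi_aligned = mutual_information(img, img)            # equals the entropy of img
mi_shifted = mutual_information(img, img[1:] + img[:1])
```

A registration algorithm would search over candidate transforms (shifts, rotations) and keep the one maximizing this score; MI is popular in medical imaging because it needs no assumption that the two modalities have linearly related intensities.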
Aggregate data objects (such as arrays) are distributed across the processor memories when compiling a data-parallel language for a distributed-memory machine. The mapping determines the amount of communication needed to bring operands of parallel operations into alignment with each other. A common approach is to break the mapping into two stages: an alignment that maps all the objects to an abstract template, followed by a distribution that maps the template to the processors. This paper describes algorithms for solving the various facets of the alignment problem: axis and stride alignment, static and mobile offset alignment, and replication labeling. We show that optimal axis and stride alignment is NP-complete for general program graphs, and give a heuristic method that can explore the space of possible solutions in a number of ways. We show that some of these strategies can give better solutions than a simple greedy approach proposed earlier. We also show how local graph contractions can reduce the size of the problem significantly without changing the best solution. This allows more complex and effective heuristics to be used. We show how to model the static offset alignment problem using linear programming, and we show that loop-dependent mobile offset alignment is sometimes necessary for optimum performance. We describe an algorithm for determining mobile alignments for objects within do loops. We also identify situations in which replicated alignment is either required by the program itself or can be used to improve performance. We describe an algorithm based on network flow that replicates objects so as to minimize the total amount of broadcast communication in replication. Chatterjee, Siddhartha; Gilbert, John R.; Oliker, Leonid; Schreiber, Robert; Sheffler, Thomas J. Ames Research Center, NAS2-13721.
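The two-stage mapping can be made concrete with a toy model: an alignment places array element i at template cell stride*i + offset, and a block distribution then assigns template cells to processors. The sketch below uses hypothetical names and a simple HPF-style block distribution; it is not the paper's algorithm, only an illustration of why mismatched offsets induce communication:

```python
def owner(i, stride=1, offset=0, block=4, nprocs=4):
    """Processor owning array element i under a two-stage mapping:
    alignment:    element i    -> template cell t = stride*i + offset
    distribution: template t   -> processor (t // block) % nprocs
    """
    t = stride * i + offset
    return (t // block) % nprocs

# Two arrays aligned to the template with different offsets: element pairs
# combined by a parallel operation may land on different processors,
# and each such mismatch costs a communication.
comm = sum(owner(i, offset=0) != owner(i, offset=2) for i in range(16))
```

Here half of the 16 element pairs land on different processors under the offset mismatch of 2; offset alignment, as treated in the paper, chooses offsets so as to minimize exactly this kind of residual communication.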
The Alignment Effect offers managers a systematic blueprint for demanding real accountability and bottom-line business results from their IT investments. Using actual case studies, Faisal Hoque introduces Business Technology Management, a comprehensive approach to aligning technology with business objectives, increasing the efficiency of technology investments, and dramatically reducing the financial and operational risks associated with business and technical change.