Can AI believe in God? A Parable about Diversity.

Can an AI believe in God? This is a seldom-asked question in the philosophical community that thinks about artificial intelligence (perhaps because that community is primarily rationalist). The question is worth asking. Numerous works of art have explored the capacity of AI to feel emotion; see the movie Her for a love story between a man and an AI. If an AI can feel intense emotion for a human being, why couldn't an AI feel a similarly fervent devotion toward an abstract deity?

Read More

Learning Models Of Disease

Modern drug discovery remains an artisanal pursuit, driven in large part by luck and expert knowledge. While this approach has worked spectacularly in the past, the last few years have seen a systematic decrease in the number of new drugs discovered per dollar spent. Eroom's law empirically demonstrates that the number of new drugs per dollar has been falling exponentially year over year. Eroom is of course Moore spelled backward; Moore's law observes that transistor densities on computer chips have increased exponentially year over year for the past fifty years. The opposing trends of increasing computational power per dollar and decreasing numbers of drugs discovered per dollar serve as a reminder that naive computation is insufficient to solve hard biological problems (a topic I've written about previously). To reverse Eroom's law, scientists must combine deep biological insights with computational modeling, and I hypothesize that the best path forward is to systematically learn causal models of human disease and drug action from available experimental data.
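
To make the exponential decline concrete, here is a minimal sketch of Eroom's law as exponential decay. The halving time of roughly nine years for drugs approved per R&D dollar is an assumption drawn from the broader Eroom's law literature, not a figure from this post; the numbers printed are purely illustrative.

```python
# Illustrative sketch: if drugs approved per R&D dollar halve roughly every
# nine years (assumed halving time), the decline compounds over decades.
import numpy as np

halving_period_years = 9.0          # assumed halving time for drugs per R&D dollar
years = np.arange(0, 60, 10)        # six decades after an arbitrary starting year

# Exponential decay relative to the starting year's productivity.
relative_productivity = 0.5 ** (years / halving_period_years)

for y, p in zip(years, relative_productivity):
    print(f"year +{y:2d}: {p:.3f}x of the starting drugs-per-dollar")
```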

Read More

The Ferocious Complexity Of The Cell

Fifty years ago, the first molecular dynamics papers allowed scientists to exhaustively simulate systems with a few dozen atoms for picoseconds. Today, thanks to tremendous gains in computational capability from Moore's law and significant gains in algorithmic sophistication from fifty years of research, modern scientists can simulate systems with hundreds of thousands of atoms for milliseconds at a time. Put another way, scientists today can study systems tens of thousands of times larger, for billions of times longer, than they could fifty years ago; the effective reach of physical simulation techniques has expanded roughly ten-trillion fold. The scope of this achievement should not be underestimated; the advent of these techniques, along with the maturation of deep learning, has permitted a host of start-ups (1, 2, 3, etc.) to investigate diseases using tools that were hitherto unimaginable.
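
A quick back-of-the-envelope check of the ten-trillion-fold figure, using the scales quoted above (a few dozen atoms for picoseconds then, hundreds of thousands of atoms for milliseconds now); the exact atom counts chosen below are illustrative placeholders.

```python
# Back-of-the-envelope arithmetic for the molecular dynamics scale-up.
atoms_then, atoms_now = 50, 500_000    # ~few dozen vs. hundreds of thousands of atoms
time_then_s, time_now_s = 1e-12, 1e-3  # picoseconds vs. milliseconds

size_gain = atoms_now / atoms_then     # ~10^4: tens of thousands of times larger
time_gain = time_now_s / time_then_s   # ~10^9: billions of times longer
combined = size_gain * time_gain       # ~10^13: the ten-trillion-fold expansion

print(f"size gain: {size_gain:.0e}, time gain: {time_gain:.0e}, combined: {combined:.0e}")
```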

Read More

Why Antibiotics Are Hard

There’s been a lot of recent attention to the threat of antibiotic resistance (see this recent NYTimes piece for example). As a quick summary, overuse of antibiotics by doctors and farmers has triggered the evolution of bacteria that are resistant to all available antibiotics. The conclusion follows that more needs to be done to curtail unnecessary antibiotic use and to develop novel antibiotics that can cope with the coming onslaught of antibiotic-resistant bacteria.

Read More

Machine Learning for Scientific Datasets

I just read the fascinating paper Could a neuroscientist understand a microprocessor?. The paper simulates a simple microprocessor (the MOS 6502, used in the Apple I and in the Atari video game system) and uses neuroinformatics techniques (mostly statistics and machine learning) to analyze the simulated microprocessor. More specifically, the authors analyze the connections between transistors on the microprocessor (see Connectomics), ablation of single transistors, covariances of transistor activities, and whole-chip recordings (analogous to whole-brain recordings).
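
A toy sketch of the flavor of analysis involved: record activity traces for a handful of "transistors", compute their pairwise covariances, and "ablate" one unit. The traces here are random and purely illustrative; the actual paper works with traces from a full transistor-level simulation of the 6502.

```python
# Toy covariance-and-ablation analysis on fake transistor activity traces.
import numpy as np

rng = np.random.default_rng(6502)
n_transistors, n_timesteps = 8, 10_000

# Fake on/off activity traces (rows = transistors, columns = time steps).
activity = rng.integers(0, 2, size=(n_transistors, n_timesteps)).astype(float)

# Covariance of transistor activities, analogous to pairwise neural correlations.
cov = np.cov(activity)
print("covariance matrix shape:", cov.shape)

# Single-"transistor" ablation: clamp one unit off and recompute a summary statistic.
ablated = activity.copy()
ablated[3, :] = 0.0
print("mean activity before ablation:", activity.mean())
print("mean activity after ablating unit 3:", ablated.mean())
```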

Read More