Public Lecture with Google DeepMind's Demis Hassabis
Watch Google DeepMind founder Demis Hassabis's lecture about the future and capabilities of artificial intelligence.
This video was filmed by IET TV.
For more information about the Hassabis talk, visit the RTS website to read the event report. -
Future directions of machine learning: Part 2
On 22 May 2015, the Royal Society launched a new series of high-level conferences on major scientific and technical challenges of the next decade. 'Breakthrough Science and Technologies: Transforming our Future' conferences feature cutting-edge science from industry and academia and bring together leading experts from the wider scientific community, industry, government, funding bodies and charities.
The first conference was on the topic of machine learning and was organised by Dr Hermann Hauser KBE FREng FRS and Dr Robert Ghanea-Hercock.
This video session features:
- Professor Steve Furber CBE FREng FRS, “Building Brains” (00:00)
- Simon Knowles, “Machines for intelligence” (28:58)
- Professor Simon Benjamin, “Machine learning as a near-future application of emerging quantum technologies” (56:05)
- Dr Demis Hassabis, “General learning algorithms” (1:25:55)
Read more about the conference on the Royal Society website: https://royalsociety.org/events/2015/05/breakthrough-science-technologies-machine-learning/ -
Shane Legg on a New Metric for Measuring Machine Intelligence
-
Can A.I. Become More Human? // Gary Marcus, Geometric Intelligence (Hosted by FirstMark Capital)
Gary Marcus, CEO at Geometric Intelligence, presented at FirstMark's Data Driven NYC on January 19, 2016. Marcus's talk focused on the ways in which A.I. is stuck.
Geometric Intelligence is redefining the boundaries of machine learning through innovative, patent-pending techniques that learn more efficiently from less data.
Data Driven NYC is a monthly event covering Big Data and data-driven products and startups, hosted by Matt Turck, partner at FirstMark.
FirstMark is an early stage venture capital firm based in New York City. Find out more about Data Driven NYC at http://datadrivennyc.com and FirstMark Capital at http://firstmarkcap.com. -
Accelerated Understanding: Deep Learning, Intelligent Applications, and GPUs
-
Extracting Knowledge from Text - Pedro Domingos
Title: Extracting Knowledge from Text with Tractable Markov Logic & Symmetry-Based Semantic Parsing
Abstract: Building very large commonsense knowledge bases and reasoning with them is a long-standing dream of AI. Today that knowledge is available in text; all we have to do is extract it. Text, however, is extremely messy, noisy, ambiguous, incomplete, and variable. A formal representation of it needs to be both probabilistic and relational, either of which leads to intractable inference and therefore poor scalability. In the first part of this talk I will describe tractable Markov logic, a language that is restricted enough to be tractable yet expressive enough to represent much of the commonsense knowledge contained in text. Even then, transforming text into a formal representation of its meaning remains a difficult problem. There is no agreement on what the representation primitives should be, and labeled data in the form of sentence-meaning pairs for training a semantic parser is very hard to come by. In the second part of the talk I will propose a solution to both these problems, based on concepts from symmetry group theory. A symmetry of a sentence is a syntactic transformation that does not change its meaning. Learning a semantic parser for a language is discovering its symmetry group, and the meaning of a sentence is its orbit under the group (i.e., the set of all sentences it can be mapped to by composing symmetries).
Preliminary experiments indicate that tractable Markov logic and symmetry-based semantic parsing can be powerful tools for scalably extracting knowledge from text.
The Master Algorithm | Pedro Domingos | Talks at Google
Machine learning is the automation of discovery, and it is responsible for making our smartphones work, helping Netflix suggest movies for us to watch, and getting presidents elected. But there is a push to use machine learning to do even more—to cure cancer and AIDS and possibly solve every problem humanity has. Domingos is at the very forefront of the search for the Master Algorithm, a universal learner capable of deriving all knowledge—past, present and future—from data. In this book, he lifts the veil on the usually secretive machine learning industry and details the quest for the Master Algorithm, along with the revolutionary implications such a discovery will have on our society.
Pedro Domingos is a Professor of Computer Science and Engineering at the University of Washington, and he is the cofounder of the International Machine Learning Society.
https://books.google.com/books/about/The_Master_Algorithm.html?id=glUtrgEACAAJ
This Authors at Google talk was hosted by Boris Debic.
eBook
https://play.google.com/store/books/details/Pedro_Domingos_The_Master_Algorithm?id=CPgqCgAAQBAJ -
Open and Exploratory Extraction of Relations and Common Sense from Large Text Corpora - Alan Akbik
Alan Akbik, November 10, 2014
Title: Open and Exploratory Extraction of Relations (and Common Sense) from Large Text Corpora
Abstract: The use of deep syntactic information such as typed dependencies has been shown to be very effective in Information Extraction (IE). Despite this potential, the process of manually creating rule-based information extractors that operate on dependency trees is not intuitive for persons without an extensive NLP background. In this talk, I present an approach and a graphical tool that allow even novice users to quickly and easily define extraction patterns over dependency trees and directly execute them on a very large text corpus. This enables users to explore a corpus for structured information of interest in a highly interactive and data-guided fashion, and allows them to create extractors for those semantic relations they find interesting. I then present a project in which we use Information Extraction to automatically construct a very large common sense knowledge base. This knowledge base - dubbed "The Weltmodell" - contains common sense facts that pertain to proper noun concepts; an example of this is the concept "coffee", for which we know that it is typically drunk by a person or brought by a waiter. I show how we mine such information from very large amounts of text, how we quantify notions such as typicality and similarity, and discuss some ideas about how such world knowledge can be used to address reasoning tasks.
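As a rough illustration of the general technique (a minimal sketch, not Akbik's tool or its pattern language), the following matches a subject-verb-object pattern directly on spaCy dependency parses; the example sentence is invented and the labels assume the standard English model:

```python
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

def svo_triples(text):
    """Yield (subject, verb lemma, object) triples from dependency trees."""
    for tok in nlp(text):
        if tok.pos_ == "VERB":
            subjects = [c for c in tok.lefts if c.dep_ == "nsubj"]
            objects = [c for c in tok.rights if c.dep_ == "dobj"]
            for s in subjects:
                for o in objects:
                    yield (s.text, tok.lemma_, o.text)

# Output depends on the parser/model, e.g.
# [('waiter', 'bring', 'coffee'), ('Anna', 'drink', 'it')]
print(list(svo_triples("The waiter brought the coffee and Anna drank it.")))
```
-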
Claudio Delli Bovi: Open Information Extraction: Where Are We Going?
ABSTRACT: The Open Information Extraction (OIE) paradigm has received much attention in the NLP community over the last decade. Since the earliest days, most OIE approaches have focused on Web-scale corpora, which raises issues such as massive amounts of noise. Also, OIE systems can be very different in nature and develop their own type inventories, with no portable ontological structure. This talk steps back and explores both issues by presenting two substantially different approaches to the task: in the first, we shift the target of a full-fledged OIE pipeline to a relatively small, dense corpus of definitional knowledge; in the second, we try to make sense of different OIE outputs by merging them into a single, unified and fully disambiguated knowledge repository.
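A toy sketch of the merging step (the linking table, relation names, and triples below are invented; real systems rely on entity linking and relation disambiguation): outputs from two hypothetical OIE systems collapse into one unified triple once arguments and relations are mapped to canonical identifiers.

```python
# Hand-made linking table standing in for an entity linker (hypothetical).
linking = {
    "NYC": "New_York_City",
    "the Big Apple": "New_York_City",
    "is located in": "locatedIn",
    "lies in": "locatedIn",
}

def canonical(x):
    """Map a surface string to a canonical identifier."""
    return linking.get(x, x.replace(" ", "_"))

# Outputs from two hypothetical OIE systems.
system_a = [("NYC", "is located in", "New York State")]
system_b = [("the Big Apple", "lies in", "New York State")]

# Both triples collapse to one disambiguated entry in the repository.
unified = {tuple(canonical(p) for p in t) for t in system_a + system_b}
print(unified)  # {('New_York_City', 'locatedIn', 'New_York_State')}
```
-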
Dave Snowden | Managing under conditions of uncertainty | State of the Net 2014
State of the Net 2014 - Trieste (Italy), June 12th-14th
Keynote speech
Managing under conditions of uncertainty: lessons from the natural sciences
Dave Snowden, chief scientific officer at Cognitive Edge
http://sotn.it/speakers/dave-snowden/
State of the Net
http://sotn.it
http://twitter.com/stateofthenet
http://facebook.com/stateofthenet
#sotn14 -
Debate: "Does AI Need More Innate Machinery?" (Yann LeCun, Gary Marcus)
Debate between Yann LeCun and Gary Marcus at NYU, October 5, 2017. Moderated by David Chalmers. Sponsored by the NYU Center for Mind, Brain and Consciousness.