Understanding Knowledge Graphs
The first AI systems relied heavily on hand-crafted knowledge stored in their databases. A typical expert system used this knowledge to reason about input data and produce meaningful results. The knowledge mostly consisted of simple if-then rules, such as: if the temperature sensor reads above 100°C, turn off the electric kettle. Although machine learning has outperformed rule-based AI in many domains, knowledge bases and graphs still play a huge role in many intelligent systems.
Basics
The elementary unit of a knowledge graph is a subject-predicate-object triplet, often denoted as (head, relation, tail) or (h, r, t) — for example, (Barack Obama, Wife, Michelle Obama).
Each triplet defines one connection between two entities in the graph. The set of acceptable relationship and entity types defines the ontology of the KG, which is also its general structure. For example, it may be a graph of geographical objects, biomedical structures, or web pages. Given some collection of entries, a KG allows us to perform inference: combining existing triplets to derive facts that are not stated explicitly.
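A minimal sketch of this idea: store the graph as a set of (head, relation, tail) triplets and derive a new fact by chaining existing ones. The entities and relation names here are illustrative, not part of any real ontology.

```python
# A tiny knowledge graph stored as a set of (head, relation, tail) triplets.
# Entities and relations are made up for illustration.
triplets = {
    ("Paris", "capital_of", "France"),
    ("France", "located_in", "Europe"),
    ("Berlin", "capital_of", "Germany"),
    ("Germany", "located_in", "Europe"),
}

def tails(head, relation):
    """Return all tails connected to `head` via `relation`."""
    return {t for (h, r, t) in triplets if h == head and r == relation}

def capital_continent(city):
    """Inference: a capital lies on the continent of its country."""
    for country in tails(city, "capital_of"):
        for continent in tails(country, "located_in"):
            return continent

print(capital_continent("Paris"))  # -> Europe
```

The derived triplet (Paris, located_in, Europe) is never stored; it is produced by traversing two stored edges.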
In a large database it may be quite hard to get similar results, but in a KG we can use common graph algorithms. Compared with relational and key-value databases, knowledge graphs lie somewhere in the middle: they usually don't have a rigid schema, but they still encode relationships between records.
An ordinary query to a knowledge base looks like this: find the entity named “Barack Obama”, find the tail of its “Wife” relation, find the tail of her “Hometown” relation, and return its “Name” property. As a result you should get “Chicago”.
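The query above can be sketched as a chain of dictionary lookups; the toy storage layout and helper function below are assumptions, not a real KG query engine.

```python
# Toy store: relations map (entity, relation) -> entity,
# properties map (entity, property) -> literal value.
relations = {
    ("Barack Obama", "Wife"): "Michelle Obama",
    ("Michelle Obama", "Hometown"): "Chicago",
}
properties = {
    ("Chicago", "Name"): "Chicago",
}

def query(start, *relation_chain, prop=None):
    """Follow a chain of relations from `start`, then read a property."""
    entity = start
    for rel in relation_chain:
        entity = relations[(entity, rel)]
    return properties[(entity, prop)] if prop else entity

print(query("Barack Obama", "Wife", "Hometown", prop="Name"))  # -> Chicago
```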
This idea is also the foundation of the Semantic Web. To make information on the internet interconnected and understandable to machines, standards such as RDF and Schema.org were developed.
KG and Neural Nets
In the last five years many researchers have used neural networks to solve common KG problems. One of them is knowledge graph completion: given a graph, fill in more meaningful relations between existing entities. More formally, given <h, r, ?>, <h, ?, t> or <?, r, t>, you need to calculate the probabilities of potential candidates for the missing value. Much research is also done on embedding entities and relations into vector spaces with NNs, similar to Word2Vec.
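One well-known embedding approach of this kind is TransE, which models a relation as a translation in vector space, so a plausible triplet (h, r, t) satisfies h + r ≈ t. The sketch below uses random vectors as stand-ins for trained embeddings; the entity and relation names are invented for illustration.

```python
import numpy as np

# TransE-style scoring sketch: a plausible (h, r, t) should satisfy h + r ≈ t.
# Embeddings here are random stand-ins, not trained vectors.
rng = np.random.default_rng(0)
dim = 8
entities = {name: rng.normal(size=dim) for name in ["A", "B", "C"]}
relations = {"rel": rng.normal(size=dim)}

def score(h, r, t):
    """Lower score = more plausible triplet under the translation model."""
    return np.linalg.norm(entities[h] + relations[r] - entities[t])

# Completion of <h, r, ?>: rank all candidate tails by score.
candidates = sorted(entities, key=lambda t: score("A", "rel", t))
print(candidates)
```

In a real system the embeddings would be trained so that observed triplets score lower than corrupted ones; ranking candidates by this score is how the missing-value probabilities are approximated.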
Additionally, DeepMind’s recent work shows state-of-the-art performance in relational reasoning over natural language and images.
Applications
Knowledge bases are the leading tool for question-answering systems. Google uses its KG to show users additional information about their query alongside generic search results.
Mobile assistants like Siri and Cortana maintain knowledge bases with information about available services and user data. Many chatbots also work on these principles, as do medical diagnostic systems, map providers, customer services, and many others.
In summary, I think future developments will allow us to store all our knowledge as a set of connected vectors and use artificial neural networks to reason over this information.