Applying hyperbolic 3-manifolds to quantify bias and measure relationships
A NEW MATHEMATICAL APPROACH: HYPERBOLIC 3-MANIFOLD TOPOLOGY
Solves a different class of problems than Euclidean, neural network-based approaches
- Layered Vector Cluster Pattern with Trim
- Geometric Associative Memory
- Adaptive System Reasoning
- Decentralized Architecture
Relational AI understands individual uniqueness by mapping relationships among what people say, do, and feel over time and across contexts. Each mapping uses signals from multiple sources to create a complex cultural, moral, and emotional lens, or benchmark, for each person. Each lens represents that individual’s view of the world, which allows for comparisons among individuals, groups, or concepts in order to understand emotional, moral, and cultural differences.
How It Works
Layered Vector Cluster Pattern with Trim (LVCPT) starts by creating layers of associations between signals and meaning. What makes LVCPT different is that it preserves both the global and local properties of signals. This is necessary for our Geometric Associative Memory to have Adaptive System Reasoning (see related tabs to understand the benefits and how this is done).
The highest LVCPT layer, the Global Vector Layer (GloVe), extracts internal system signals (such as emotion) and external system signals (such as opinion and action) and associates them with higher-level concepts, entities, or meaning. It is similar to Stanford’s GloVe (Global Vectors for Word Representation), which captures the universal meaning of words, but goes several steps further by (1) processing multi-modal signals (verbal and non-verbal) instead of just words and (2) allowing for multiple lenses of the same item within a single layer.
All other layers are Local Vector Layers (LoVe), which represent clusters of locally related items, such as a single 3-item grouping, token, token sequence, vector, concept, manifold, or individual. The global layer’s universality means it can help explain our understanding of the relationships within a local layer. That’s what makes this process so powerful and why our AI offers a significant improvement over other AI: the multi-dimensionality of the global layer means we can use many different signals to build complex contextual understanding, and then use this contextual understanding to discover more accurate, less costly associations at the local layer.
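The interplay between the two layers can be sketched in a few lines. This is a minimal illustration, not the LVCPT implementation: the item names, vectors, concept tags, and blending weight are all assumptions, chosen only to show how a global concept layer can refine a local-vector similarity.

```python
import math

# Hypothetical sketch: a local-layer (LoVe) similarity refined by a
# global-layer (GloVe-style) concept overlap. All names and the
# blending weight are illustrative assumptions.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Local layer: signal vectors for two items (e.g. token sequences).
local = {"item_a": [0.9, 0.1, 0.0], "item_b": [0.8, 0.2, 0.1]}

# Global layer: each item tagged with higher-level concepts.
global_concepts = {"item_a": {"excitement", "science"},
                   "item_b": {"excitement", "sports"}}

def layered_similarity(a, b, alpha=0.7):
    """Blend local-vector similarity with global concept overlap."""
    local_sim = cosine(local[a], local[b])
    ga, gb = global_concepts[a], global_concepts[b]
    global_sim = len(ga & gb) / len(ga | gb)  # Jaccard overlap
    return alpha * local_sim + (1 - alpha) * global_sim

print(round(layered_similarity("item_a", "item_b"), 3))
```

Items that look alike locally but carry disjoint global concepts score lower than items that agree at both layers, which is the contextual-correction effect described above.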
Leaps Ahead of Common AI
Common AI uses Euclidean transformations, which limit the number of signals that can be processed and restrict the world view to a single lens. This type of Euclidean AI can neither summarize higher-level concepts nor understand how one lens relates to others.
By contrast, Relational AI can handle signals from multiple sources and remember multiple perspectives and layers of meaning because it relies on hyperbolic 3-manifold topology with Geometric Associative Memory and Adaptive System Reasoning. Relational AI’s layered vectorization approach means our AI learns not just from the data we give it at the local layer, but also from relational higher level ideas, concepts, entities or system meaning that we apply at the global layer. This is how Relational AI improves common AI with corrective feedback to over 90% accuracy – our AI processes more data with higher dimensionality and identifies non-linear, complex relationships.
Relational AI is mostly unsupervised machine learning. We say “mostly unsupervised” because, like a car engine, it needs a starter to get things moving but then operates on its own. The “starter” is guidance from sparse explicit or implicit summaries of higher-level concepts, entities, or system meaning, expressed as position coordinates within a hyperbolic 3-manifold. Using this guidance, we apply Geometric Associative Memory to classify non-linear signal patterns and associate meaning with them. Geometric Associative Memory powers Adaptive System Reasoning (see related tabs to learn about the benefits and how this works).
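A minimal sketch of the “starter” step, under heavy assumptions: the concept anchors and their coordinates are invented for illustration, and classification is reduced to nearest-anchor lookup under the standard Poincaré-ball distance (one common model of hyperbolic space).

```python
import math

# Sketch of seeding "mostly unsupervised" learning: a few hand-placed
# anchor coordinates stand in for the sparse summaries of higher-level
# concepts; new signal patterns are classified by hyperbolic distance
# to the nearest anchor. Anchor names and coordinates are assumptions.

def poincare_distance(u, v):
    """Geodesic distance in the Poincare ball model (requires ||x|| < 1)."""
    diff2 = sum((a - b) ** 2 for a, b in zip(u, v))
    nu2 = sum(a * a for a in u)
    nv2 = sum(b * b for b in v)
    return math.acosh(1 + 2 * diff2 / ((1 - nu2) * (1 - nv2)))

# Sparse "starter" guidance: concept anchors as manifold coordinates.
anchors = {"joy": (0.6, 0.1, 0.0),
           "anger": (-0.5, -0.3, 0.1),
           "curiosity": (0.1, 0.7, -0.2)}

def classify(signal):
    """Associate a signal pattern with the closest concept anchor."""
    return min(anchors, key=lambda c: poincare_distance(signal, anchors[c]))

print(classify((0.5, 0.2, 0.05)))  # → "joy"
```

Once the anchors are placed, no further labels are needed: every new signal is positioned and interpreted relative to them, which matches the engine-starter analogy.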
How It Works
We map items – for example, concepts, people, experiences, and things – to a geometric hyperbolic 3-manifold. Our algorithm defines the manifold with a cluster of signals – for example, a spoken word, a voice emotion, and an action. It then samples uniformly at random from the Geometric Associative Memory to discover the closest manifold associated with the items. That’s how our AI interprets context.
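The lookup described above might be sketched as follows. The memory contents, the embedding of a signal cluster as a single point, and the sample size are all illustrative assumptions; only the Poincaré-ball distance is a standard formula.

```python
import math
import random

# Hedged sketch: a Geometric Associative Memory is reduced here to a
# list of labeled centroid coordinates in the Poincare ball; a cluster
# of signals (word + voice emotion + action, embedded as one point) is
# matched by sampling memory entries and keeping the closest.

def poincare_distance(u, v):
    """Geodesic distance in the Poincare ball model (requires ||x|| < 1)."""
    diff2 = sum((a - b) ** 2 for a, b in zip(u, v))
    nu2 = sum(a * a for a in u)
    nv2 = sum(b * b for b in v)
    return math.acosh(1 + 2 * diff2 / ((1 - nu2) * (1 - nv2)))

memory = [("rocket_launch", (0.55, 0.2, 0.0)),
          ("traffic_jam", (-0.4, -0.5, 0.2)),
          ("birthday_party", (0.1, 0.6, -0.3))]

def closest_manifold(signal_cluster, samples=2, seed=0):
    """Sample uniformly at random from memory; return the nearest match."""
    rng = random.Random(seed)
    candidates = rng.sample(memory, samples)
    return min(candidates,
               key=lambda m: poincare_distance(signal_cluster, m[1]))[0]

print(closest_manifold((0.5, 0.25, -0.05), samples=3))  # → "rocket_launch"
```

Sampling rather than scanning the whole memory is what keeps the lookup cheap as the memory grows; with `samples` equal to the memory size it degenerates to an exact nearest-neighbor search.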
Shared Objectivity, Not Statistical Data Science
Using hyperbolic lenses, Relational AI utilizes the inherent structure of manifolds to expand upon, and make inferences about, real world concepts, such as similarity among people within a community or the difference in understanding between two cultural and moral perspectives.
Relational AI’s power stems from the adaptability of the system. With countless hyperbolic 3-manifolds available at any single point in time, we can view millions of multi-modal signals and discover the non-linear relationships representing an individual’s cultural, moral, political, and scientific beliefs.
Not only can we map a single point in time, but we can also map a sequence of temporal points, represented as a collection of manifolds. This sequencing effect is similar to video: picture frames captured every millisecond and assembled in temporal order. Where it differs is that our sequences succinctly summarize much of the local-level (LoVe) structure and context (layered GloVe) of a given moment – for example, audio (“That’s so cool!”), image (excited face, hand pointing at rocket launch), emotion (high positive excitement and arousal), and physiological data (heart rate). Relational AI has the flexibility to apply hyperbolic 3-manifold learning algorithms to these layers of increasingly complex combinations of points and sequences.
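The frame analogy can be made concrete with a small sketch. The field names, example signals, and coordinates below are assumptions for illustration, not the product’s schema: each “moment” bundles multi-modal signals with one manifold coordinate, and a sequence is a time-ordered list of moments.

```python
from dataclasses import dataclass

# Illustrative sketch of temporal sequencing: a Moment is the analogue
# of a video frame, carrying multi-modal signals plus a position in the
# hyperbolic 3-manifold. All field names and values are assumptions.

@dataclass
class Moment:
    t_ms: int          # timestamp in milliseconds
    signals: dict      # modality -> observed value
    coordinate: tuple  # position in the hyperbolic 3-manifold

sequence = [
    Moment(0, {"audio": "That's so cool!",
               "emotion": "high positive excitement",
               "heart_rate": 96}, (0.52, 0.18, -0.03)),
    Moment(40, {"image": "hand pointing at rocket launch",
                "heart_rate": 101}, (0.55, 0.21, -0.02)),
]

# A succinct summary of the sequence: the trajectory through the manifold.
trajectory = [(m.t_ms, m.coordinate)
              for m in sorted(sequence, key=lambda m: m.t_ms)]
print(trajectory)
```

The trajectory is the compact object the text describes: it drops the raw per-frame detail and keeps only each moment’s position in the manifold, in temporal order.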
Relational AI has what we call “Adaptive System Reasoning.” It is Smart Managed Services for continuously improved relational intelligence on IoT devices and AI. Adaptive System Reasoning consists of two processing cycles: a “Fast Cycle” and a “Slow Cycle.” The Fast Cycle discovers from noise what triggers each individual in relation to the world around them (such as other people, places, and things) and drives personalization. The Slow Cycle adds scientific reasoning to explain the specific trigger and its cause and effect on the complex adaptive system around it. It is made possible by our hyperbolic 3-manifold design, Geometric Associative Memory (see related tabs to understand benefits and how it is done), and Decentralized Architecture (see related tabs to understand benefits and how it is done).
How It Works
The Fast Cycle makes sense of multi-modal signals from decentralized inputs. It glues together heterogeneous datasets (for example, LinkedIn/CRM/Smart City; Office 365/Building IoT) to understand an individual (in the form of a local vector system: a LoVe manifold) in relation to the complex adaptive system around her (in the form of a global vector system: a GloVe manifold). Using Geometric Associative Memory and our Decentralized Architecture, the Fast Cycle is a Relational Intelligence vector run-time cycle. It is decentralized and data-driven. The Relational Intelligence AI maps LoVe graphs into personal manifolds in relation to GloVe graphs and scientific manifolds. In addition, individuals own their data, resulting in greater security, the ability to opt in, and greater control over data sharing and transfers.
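The gluing step can be sketched as a simple per-person merge. The source names and fields below are stand-ins for the heterogeneous datasets mentioned above (CRM, Office 365, building IoT), invented purely for illustration.

```python
# Hedged sketch of the Fast Cycle's "gluing": records about the same
# person from decentralized sources are merged into one multi-source
# signal bundle, the raw material a LoVe manifold would be fit to.
# Dataset names and fields are illustrative assumptions.

crm = {"alice": {"role": "engineer"}}
office = {"alice": {"meetings_today": 4}}
building_iot = {"alice": {"badge_in": "08:55"}}

def glue(person, *sources):
    """Merge one person's records from decentralized sources."""
    bundle = {"person": person}
    for source in sources:
        bundle.update(source.get(person, {}))
    return bundle

print(glue("alice", crm, office, building_iot))
```

In practice the merge would also need schema alignment and conflict resolution across sources; the point here is only that the unit of fusion is the individual, not the dataset.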
You can think of the Slow Cycle as scaffolding: a starting point that models known scientific patterns codified in Wikipedia and in web documents from public-access journals (such as researchgate.net and arxiv.org). The Slow Cycle creates structured labels so that we can associate the unstructured, real-world patterns discovered by the Fast Cycle with an objective interpretation of context. This results in a less biased interpretation of the data than that provided by statistical data science.
When the Fast Cycle discovers new patterns with no known label, we expose those novel patterns to a guided learning tool so that researchers can create and test hypotheses to explain the patterns we observe. Learning from the guided research tool makes this type of labeling proactive and real-time, resulting in faster pattern recognition and more accuracy than reactive data science.
Relational AI has a Decentralized Architecture. It is enabled by our hyperbolic 3-manifold design, Geometric Associative Memory, and Adaptive System Reasoning (see related tabs to understand benefits and how it works). It continuously improves explanations of cause and effect by filtering out noise. Relational AI may be run as a managed service or decentralized onto an embedded runtime environment. Relational AI is an upstack layer engine that glues existing IoT devices and AI into a complex adaptive system.
How It Works
Ipvive’s unsupervised graph sparsification uses decentralization to improve the efficiency of edge processing and secure personalization. With billions of people constantly producing more and more data, decentralized processing with embedded AI on IoT devices (“edge processing”) helps avoid risky and expensive centralized data processing. Our Relational AI vector run-time cycle operates on the edge, which means we can leave data on the device instead of moving it to our servers. This means faster results and more data privacy. In addition, we use unsupervised graph management at the individual and group levels to securely check identity, provide value in exchange for data, and improve the speed and efficiency of transactions by using decentralized ledger technology.
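The privacy claim can be illustrated with a small sketch: raw signals stay on the device, and only a compact summary (plus an integrity hash) is transmitted. The summarization (a per-dimension mean) and the payload shape are assumptions for illustration, not the actual run-time protocol.

```python
import hashlib

# Hedged sketch of edge processing: raw signals never leave the device;
# only a derived summary vector and a short integrity digest do. The
# summarization and payload fields are illustrative assumptions.

def on_device_cycle(raw_signals):
    """Run locally on the device; return only what may be shared."""
    # Per-dimension mean across all raw signal vectors.
    summary = [sum(col) / len(col) for col in zip(*raw_signals)]
    # Digest lets the server verify the payload without seeing raw data.
    digest = hashlib.sha256(repr(raw_signals).encode()).hexdigest()[:12]
    return {"summary_vector": summary, "integrity": digest}

raw = [[0.9, 0.1], [0.7, 0.3]]   # stays on the device
payload = on_device_cycle(raw)   # only this is transmitted
print(payload["summary_vector"])
```

The design choice mirrors the text: computation moves to the data, and the network carries only low-dimensional results, which is both cheaper and more private than shipping raw streams to a central server.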