Interview: Dr. Yu Xu, CEO and Founder of TigerGraph


I recently caught up with Dr. Yu Xu, CEO and Founder of TigerGraph, to discuss the genesis of graph analytics, how the technology has evolved over time, how it’s being used today, and where the graph market is headed. Dr. Yu Xu is the founder and CEO of TigerGraph, the world’s first native parallel graph database. Dr. Xu received his Ph.D. in Computer Science and Engineering from the University of California, San Diego. He is an expert in big data and parallel database systems and holds 26 patents in parallel data management and optimization. Prior to founding TigerGraph, Dr. Xu worked on Twitter’s data infrastructure for massive data analytics. Before that, he was Teradata’s Hadoop architect, where he led the company’s big data initiatives.

Daniel D. Gutierrez – Managing Editor, insideBIGDATA

insideBIGDATA: What is graph analytics? What are the main value propositions of this technology?

Dr. Yu Xu: Today, companies are demanding real-time data to make informed decisions and to provide better customer experiences. Graph analytics are optimized to deliver new insight and intelligence previously impossible or hard to detect, allowing enterprises to capture key business moments for competitive advantage.

When data are modeled as a graph, with entities as nodes and the connections between them as edges, new perspectives are revealed. Graph analytics leverages those connections to better reveal patterns, non-obvious relationships, correlations and sequences.
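To make the node-and-edge model concrete, here is a minimal sketch assuming the open-source networkx library and made-up entity names; it is illustrative only, not how TigerGraph itself stores data:

```python
# A minimal sketch of modeling connected data as nodes and edges.
# Assumes the open-source networkx library; entity names are illustrative.
import networkx as nx

g = nx.Graph()

# Nodes carry attributes describing each entity.
g.add_node("alice", type="Customer")
g.add_node("acct_42", type="Account")
g.add_node("phone_555", type="Phone")

# Edges capture the relationships that connect those entities.
g.add_edge("alice", "acct_42", relation="owns")
g.add_edge("alice", "phone_555", relation="uses")

# Patterns emerge from the connections rather than from rows in isolation.
print(list(g.neighbors("alice")))  # ['acct_42', 'phone_555']
```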

As a result, graph analytics are key to providing insight at critical business moments, deriving meaning from connected data for immediate and measurable business outcomes. This can range from an additional widget sold by a real-time recommendation engine to identifying a transaction anomaly, whether it’s caused by a fraudster or by a microservice missing its SLA.

Graph analytics can be aggregational, assessing the graph as a whole. It can also be explorational, investigating the neighborhood around some starting points. In turn, graph analytics offers enormous potential to unveil new business opportunities and create new value chains, achieved through ad hoc queries rather than custom code.
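As a rough illustration of the two modes, the following sketch (again assuming networkx and its built-in toy graph) contrasts an aggregational metric computed over the whole graph with an explorational look at the neighborhood around one starting node:

```python
# Sketch contrasting aggregational vs. explorational analytics on a toy graph.
import networkx as nx

g = nx.karate_club_graph()  # small built-in example graph

# Aggregational: assess the graph as a whole, e.g. PageRank over every node.
scores = nx.pagerank(g)
most_central = max(scores, key=scores.get)
print("most central node:", most_central)

# Explorational: investigate the neighborhood around a chosen starting point.
ego = nx.ego_graph(g, 0, radius=2)  # nodes within 2 hops of node 0
print("nodes within 2 hops of node 0:", sorted(ego.nodes()))
```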

insideBIGDATA: How has graph technology evolved over time?

Dr. Yu Xu: Graph databases featuring storage of attributes as nodes or edges (“property graphs”) first arose around the start of the century. More recent interest in graphs grew with the rise of the internet, online social networks and e-commerce, as network science gained prominence and practitioners needed tools. We saw the definition of the Semantic Web, the proposal of the RDF data format, and the emergence of the SPARQL query language. Eventually, RDF and knowledge graph applications were developed around these standards.

The Neo4j graph database began in 2002 before being launched as a product in 2010. The Titan graph database began as an open-source project that built a graph layer on top of distributed storage and key-value systems (such as Berkeley DB, HBase, and Cassandra). Apache GraphX and Giraph both present users with a graph interface, while data are managed in a more generic NoSQL format. In other cases, object-oriented databases have been repositioned as graph databases. However, these early designs focused on functionality, with limited query language capabilities.

Development of graph visualization evolved separately, with open-source systems including Graphviz, Cytoscape and Gephi.

Today, enterprises expect support for growing amounts of Big Data, real time capabilities and Software as a Service (SaaS). Platforms like TigerGraph that are fast, scalable and cloud-ready represent the latest generation of graph technology and will move developments in graph analytics forward.

insideBIGDATA: How is graph used for e-commerce/customer intelligence, anti-fraud detection, supply chain intelligence and other areas?

Dr. Yu Xu: An ideal use case for graph database technology is Customer 360 or Know-Your-Customer applications, which enable businesses to derive intelligence about their customers. In the graph model, a customer becomes a hub with links to items, places, organizations, documents, other people and more. Every piece of intelligence around a customer can be modeled in the graph, and many enterprises supplement internally gathered customer data with public information for a more holistic, 360-degree view.

Using this enhanced insight from a graph, businesses can easily answer questions like: Which of my customers like the same sports team? Where do those customers shop? The possibilities are endless. Such insight can play into more specific use cases. In e-commerce, for example, it is usual to provide customers with product recommendations. The better you understand a customer’s likes, dislikes and behavioral patterns, the better you are able to automatically offer them personalized recommendations that make sense.
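A hedged sketch of such a Customer 360 style question, using networkx and invented customer, team and store names, might look like this:

```python
# Sketch of a Customer 360 style question on a toy property graph:
# "which of my customers like the same sports team?" (made-up data).
import networkx as nx
from itertools import combinations

g = nx.Graph()
g.add_edge("alice", "Team Falcons", relation="likes")
g.add_edge("bob", "Team Falcons", relation="likes")
g.add_edge("carol", "Team Otters", relation="likes")
g.add_edge("alice", "Store #12", relation="shops_at")
g.add_edge("bob", "Store #7", relation="shops_at")

customers = {"alice", "bob", "carol"}

# For each team node, any pair of connected customers is a match.
for team in ("Team Falcons", "Team Otters"):
    fans = sorted(n for n in g.neighbors(team) if n in customers)
    for a, b in combinations(fans, 2):
        print(f"{a} and {b} both like {team}")
```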

In the case of anti-fraud applications, graphs provide a new, more powerful way to connect individual acts with their associated relationships to identify fraudulent activity. Known fraud cases can then be used as training data for supervised learning to inform a graph-based set of fraud detection rules. More advanced machine learning can interpret data in terms of risk and activate a risk-based mode when needed. This is analogous to the trust-based models that social scientists are using in network analytics.
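As a simplified illustration of a connection-based fraud rule, not TigerGraph’s actual detection logic, the sketch below flags accounts that sit within a couple of hops of a known fraud case via shared identifiers (networkx, made-up data):

```python
# Simplified connection-based fraud rule on made-up data: flag accounts
# that share an identifier (phone, device) with a known fraudulent account.
import networkx as nx

g = nx.Graph()
g.add_edge("acct_A", "phone_1")   # accounts linked to the identifiers they use
g.add_edge("acct_B", "phone_1")   # acct_B shares a phone with acct_A
g.add_edge("acct_A", "device_9")
g.add_edge("acct_C", "device_9")  # acct_C shares a device with acct_A

known_fraud = {"acct_A"}  # labels from confirmed cases

# Rule: an account is suspicious if it sits within 2 hops (one shared
# identifier) of any known fraudulent account.
for acct in ("acct_B", "acct_C"):
    hops = min(nx.shortest_path_length(g, acct, f) for f in known_fraud)
    if hops <= 2:
        print(f"{acct} flagged: {hops} hops from known fraud")
```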

Graphs are also a natural model for supply chains, providing real-time visibility and analytics into key supply chain operations including order management, shipment status, and other logistics. Organizations can rapidly model their supply chain functions and business processes in real time, injecting variables such as supply disruptions, link breakages, or price changes. Graph analysis and pattern recognition are used to identify product delays, shipment status, and other quality control and risk issues. Such fast, real-time insight enables organizations to optimize orders and shipping routes, and to respond quickly to changing demand patterns as events unfold.
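A minimal sketch of injecting a link breakage into a toy supply chain graph and checking which destinations lose their route, again assuming networkx and invented node names:

```python
# Toy supply chain as a directed graph; inject a link breakage and check
# which destinations lose their route (networkx assumed, names invented).
import networkx as nx

chain = nx.DiGraph()
chain.add_edges_from([
    ("supplier", "factory"),
    ("factory", "warehouse_east"),
    ("factory", "warehouse_west"),
    ("warehouse_east", "store_1"),
    ("warehouse_west", "store_2"),
])

# Inject a disruption: the link to the eastern warehouse goes down.
chain.remove_edge("factory", "warehouse_east")

for store in ("store_1", "store_2"):
    ok = nx.has_path(chain, "supplier", store)
    print(f"{store}: {'ok' if ok else 'supply disrupted'}")
```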

insideBIGDATA: Where is the graph market headed?

Dr. Yu Xu: Along with more integration and better usability, the graph market will see new and improved collaboration in the form of MultiGraph services, where multiple groups can share one master database for access to real-time updates and collaboration. Our just-launched TigerGraph 2.0 graph analytics platform offers a breakthrough here, providing local control and security features to help enterprises meet compliance regulations, including GDPR. Breaking down data silos and improving team transparency and access to data enhances business productivity.

The graph database market will also see new technical enhancements. These include support for Real-Time Deep Link Analytics, achieved by traversing three to 10 or more hops across a big graph, with fast traversal speeds and data updates. This capability supports enterprises’ need for real-time graph analytics that can explore, discover and predict very complex relationships, offering a truly transformative technology, particularly for organizations with colossal amounts of data.
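To illustrate the idea of bounded multi-hop traversal, here is a small sketch in plain Python, not TigerGraph’s engine, that collects everything reachable within k hops of a starting node via breadth-first search (node names are invented):

```python
# Bounded multi-hop ("deep link") traversal sketch: collect everything
# reachable within k hops of a starting node via breadth-first search.
from collections import deque

graph = {
    "alice": ["acct_42", "phone_555"],
    "acct_42": ["merchant_7"],
    "phone_555": ["bob"],
    "merchant_7": [],
    "bob": ["acct_99"],
    "acct_99": [],
}

def within_k_hops(adj, start, k):
    """Return {node: hop_count} for nodes reachable in at most k hops."""
    seen = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if seen[node] == k:
            continue  # do not expand beyond the hop limit
        for nbr in adj.get(node, []):
            if nbr not in seen:
                seen[nbr] = seen[node] + 1
                queue.append(nbr)
    return seen

print(within_k_hops(graph, "alice", 3))
# {'alice': 0, 'acct_42': 1, 'phone_555': 1, 'merchant_7': 2, 'bob': 2, 'acct_99': 3}
```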

Such developments from TigerGraph, and possibly from other vendors in the future, are poised to revolutionize the potential of graph analytics to provide even more business insight and value across teams and markets. The door will open wider for even more use cases.
