apache / incubator-hugegraph

A graph database that supports 100+ billion data records, with high performance and scalability (includes OLTP engine, REST API, and backends)

Home Page: https://hugegraph.apache.org


[Feature] G6VP, a graph visualization platform, now supports HugeGraph as a data source!

Yanyan-Wang opened this issue · comments

Feature Description

Hi, dear HugeGraph team and users:

First of all, thank you very much to the HugeGraph team for contributing such a wonderful product to the community!

We are the AntV graph visualization team at Ant Group. Last year we open-sourced G6VP, a graph visualization and analysis platform built on AntV G6 (G6's website). On G6VP, users can connect their own data, either by uploading local files or by connecting to one of several graph databases, and then analyze it with the rich set of graph visualization and analysis assets the platform provides. In addition, G6VP lets users interactively assemble data sources and analysis assets to design their own graph analysis applications, and even export an SDK or HTML page to embed into their own systems.

Now G6VP supports HugeGraph as an important data source! Many thanks to the HugeGraph team for their answers and support!

Here are the steps to use HugeGraph as a data source on G6VP:

If you are already a HugeGraph user, the first few steps will be familiar to you. If you already have a HugeGraph graph running locally, skip steps 1-3 and start from step 4.

Step 1: Install Docker Desktop

Docker Desktop is a one-click-install application for Mac, Linux, and Windows that lets you build and share containerized applications and microservices. Go to the Docker website and download the installer for your environment:

After downloading, run the installer. The following uses Mac as an example:

  • Double-click Docker.dmg to install.
  • After the installation completes, double-click Docker.app to start Docker Desktop; you will see the following interface:

image
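If you prefer the command line, you could also confirm that Docker is installed and the daemon is running with the standard commands below (not part of the original guide, just a common sanity check):

docker --version
docker info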

Step 2: Install and Start the HugeGraph Docker Image

Following the introduction to the HugeGraph Docker image here, run the following command in your terminal to pull it:

docker pull hugegraph/hugegraph

Then run the following command to start the HugeGraph container with its built-in sample graph service, named 'hugegraph':

docker run -itd --name=graph -p 18080:8080 hugegraph/hugegraph

You should now see the 'graph' container in the Containers list of Docker Desktop, with the port mapping 18080:8080. If it is not running yet (the Actions column shows a play icon), click that icon to start it (the icon changes to a stop icon):

image

Now visit http://localhost:18080/ in your browser. If you see the page below, the container is running successfully:

image
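As an alternative to the browser check, you could query the HugeGraph REST API directly from the terminal; /apis/version is a standard HugeGraph endpoint, though the exact JSON output depends on the release:

curl http://localhost:18080/apis/version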

Step 3: Load Data into HugeGraph

Download HugeGraph-Toolchain here and decompress it:

image

The decompressed 'apache-hugegraph-loader-incubating-1.0.0' folder contains the official HugeGraph data loading tool. The 'apache-hugegraph-loader-incubating-1.0.0/example' folder holds several sample datasets provided by HugeGraph; you can load them directly, or prepare your own data in the same format and load it. Detailed loading steps: https://hugegraph.apache.org/cn/docs/quickstart/hugegraph-loader/

image
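For reference, loading the bundled file example usually looks like the commands below (adapted from the hugegraph-loader quickstart; the host, port, and graph name here assume the Docker mapping from Step 2, so adjust them to your setup):

cd apache-hugegraph-loader-incubating-1.0.0
sh bin/hugegraph-loader.sh -g hugegraph -f example/file/struct.json -s example/file/schema.groovy -h localhost -p 18080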

Step 4: Start the G6VP Service

At this point, we assume you have completed the HugeGraph service startup and data loading from the previous steps.

Because the local HugeGraph service cannot currently be requested cross-origin directly from a web page, you need to start G6VP and its HTTP server locally, so that requests are proxied through G6VP's BFF layer.

Clone the G6VP code from GitHub by running the following command in your terminal:

git clone https://github.com/antvis/G6VP.git

Enter the 'G6VP/packages/gi-httpservice' folder in your terminal:

cd {path where you cloned G6VP}/G6VP/packages/gi-httpservice

Install the dependencies:

npm install

Start the HTTP service:

npm run dev

The G6VP HTTP service is now running. Visit http://127.0.0.1:7001 (7001 is the default port); you should see output like the following in the console:

image
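You could also check the service from the terminal instead of the browser (a simple reachability test, not part of the original guide):

curl -I http://127.0.0.1:7001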

Step 5: Connect HugeGraph

Visit the G6VP site and go to the data import module at https://insight.antv.antgroup.com/#/dataset/create?type=GRAPH, then select 「HugeGraph」 under 「图数据库」(Graph Database).

Fill in the form with the following information:

  • Agency Address: the G6VP HTTP service address started in Step 4; the default port is 7001, so if you started it locally the address is http://127.0.0.1:7001
  • Engine Address: the HugeGraph service address started in Step 2; the default port is 18080, so if you started it locally the address is http://127.0.0.1:18080
  • Username and Password: if you have not set them up, leave them empty

image
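Before clicking connect, you could verify that the engine address is reachable with the HugeGraph graphs API, which is also what backs the subgraph list in the next step; the response shown in the comment is an assumption and may differ slightly between releases:

curl http://127.0.0.1:18080/apis/graphs
# expected: something like {"graphs": ["hugegraph"]}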

Click the 「开始连接」(Connect) button. If the connection succeeds, the message shown below appears at the top right, and the 「选择子图」(Select Subgraph) panel appears underneath:

image

The 「选择子图」(Select Subgraph) dropdown lists all the graphs in the HugeGraph Docker service you started. If no option appears, check whether the data loading step failed. Select one, then give the new G6VP dataset a name in 「数据名称」(Dataset Name); the example below uses "hugegraph-dataset2":

image

image

Click 「进入分析」(Analyze) to finish creating the dataset. The page jumps to the 「数据集」(Datasets) module, where you will find the dataset named "hugegraph-dataset2" you just created in the list:

image

Click the blue computer icon on the right of the dataset entry to create a workbook with this dataset. The page jumps to the new-workbook page with the dataset and template already filled in. Enter a name in the 「工作簿名称」(Workbook Name) field, then click 「创建画布」(Create Canvas):

image

After that, the workbook is created and you will see an empty canvas with a configuration panel on the left:

image

Step 6: Analyze the Data

You have now connected the HugeGraph database and created a dataset and a workbook, so you can start analyzing the data in the new workbook. Configure the Gremlin query asset in the workbook and enter a Gremlin query, for example g.V().limit(10) as in the figure below, which successfully returns ten vertices:

image
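A few more Gremlin queries you could try in the same query asset, for example the first ten edges, vertices with a given label, or a small neighborhood sample (illustrative sketches only; the 'person' label is a hypothetical example and depends on the schema you loaded):

g.E().limit(10)
g.V().hasLabel('person').limit(20)
g.V().limit(5).bothE().limit(20)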

If the current node and edge styles do not meet your needs, you can configure them in the 「样式」(Style) panel:

image

Click a node to query its details, which are shown in the property panel:

image

Select one node, or brush-select multiple nodes, then choose 「一度扩展」(One-degree Expansion) from the right-click menu to query the selected nodes' neighbors:

image

The result after expanding:

image
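Conceptually, a one-degree expansion corresponds to a Gremlin traversal like the one below (a sketch only; 'someVertexId' is a hypothetical placeholder, and the asset's actual query may differ):

g.V('someVertexId').both().limit(100)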

If the neighbor expansion asset does not meet your requirements, you can configure it in the left panel:

image

Use filters to analyze statistical information; G6VP also provides intelligent filter recommendations:

image

The above only covers the most basic functions. Plenty of cool graph analysis assets can be found in the assets center:

image

Integrate these assets into different containers to design and build your own graph applications:

image

Use them to analyze!

image

Finally, do not forget to save the workbook you have just configured:

image

The next time you visit G6VP, you will find this workbook in the workbook list. Don't worry: all dataset and workbook configuration is stored locally on your computer, and G6VP does not collect any user data!

image

If you want to embed the workbook into your own system, click 「开放」(Export) at the top right; there are three ways to export:

image

More usage information and documentation: https://www.yuque.com/antv/gi

If you have any questions, feel free to open an issue in our GitHub repository: http://github.com/antvis/g6vp

A star on our repo is the greatest encouragement for our open-source work!

Have a nice day~

Fascinating integrations!!!

Congratulations~

Nice to hear the news😁 I'll transfer it to an "Announcement discussion"

(we will test & add it in our readme file or official website soon)

Hi @Yanyan-Wang, when I configure the connection, I get this exception:
image
image

It seems that the G6VP HTTP service and hugegraph-server are running normally:
image
image