Demo: http://43.138.157.47:8013/dashboard (account: poc, password: 123456)
Home
Data Integration
Metadata Management
Metadata Collection
Application Analysis
System Menu Management
Metadata Management
Data Quality
Data Market
Data Standards
BI Reports
Data Assets
Workflow Orchestration
See Resource/FlinkDDLSQL.sql
```sql
CREATE TABLE data_gen (
    amount BIGINT
) WITH (
    'connector' = 'datagen',
    'rows-per-second' = '1',
    'number-of-rows' = '3',
    'fields.amount.kind' = 'random',
    'fields.amount.min' = '10',
    'fields.amount.max' = '11'
);

CREATE TABLE mysql_sink (
    amount BIGINT,
    PRIMARY KEY (amount) NOT ENFORCED
) WITH (
    'connector' = 'jdbc',
    'url' = 'jdbc:mysql://localhost:3306/test_db',
    'table-name' = 'test_table',
    'username' = 'root',
    'password' = '123456',
    'lookup.cache.max-rows' = '5000',
    'lookup.cache.ttl' = '10min'
);

INSERT INTO mysql_sink SELECT amount AS amount FROM data_gen;
```
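The JDBC sink writes into the MySQL table `test_table`, which must already exist before the job runs. A minimal setup sketch (assumption: the database, table, and credentials are taken from the WITH clause above; the script only emits the DDL, which you pipe into the `mysql` client to apply):

```shell
# Emit the DDL for the table the JDBC sink above writes to. Apply it with e.g.:
#   sh create_sink_table.sh | mysql -h localhost -P 3306 -u root -p123456
ddl='CREATE DATABASE IF NOT EXISTS test_db;
USE test_db;
CREATE TABLE IF NOT EXISTS test_table (
    amount BIGINT PRIMARY KEY
);'
echo "$ddl"
```

The primary key mirrors `PRIMARY KEY (amount) NOT ENFORCED` in the Flink DDL, so repeated random amounts upsert rather than duplicate.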
Results

1. Flink lineage build result (tables):

```
[LineageTable{id='4', name='data_gen', columns=[LineageColumn{name='amount', title='amount'}]},
 LineageTable{id='6', name='mysql_sink', columns=[LineageColumn{name='amount', title='amount'}]}]
```

Table ID: 4
Table Name: data_gen
Table Column: LineageColumn{name='amount', title='amount'}
Table ID: 6
Table Name: mysql_sink
Table Column: LineageColumn{name='amount', title='amount'}

2. Flink lineage build result (edges):

```
[LineageRelation{id='1', srcTableId='4', tgtTableId='6', srcTableColName='amount', tgtTableColName='amount'}]
```

Table Edge: LineageRelation{id='1', srcTableId='4', tgtTableId='6', srcTableColName='amount', tgtTableColName='amount'}
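The edge record connects the source column to the target column by table ID. For quick inspection of such log lines, the IDs can be pulled out with `sed` (a throwaway sketch, not part of the project code):

```shell
# Extract source/target table IDs from a LineageRelation log line.
line="LineageRelation{id='1', srcTableId='4', tgtTableId='6', srcTableColName='amount', tgtTableColName='amount'}"
src=$(echo "$line" | sed -n "s/.*srcTableId='\([0-9]*\)'.*/\1/p")
tgt=$(echo "$line" | sed -n "s/.*tgtTableId='\([0-9]*\)'.*/\1/p")
echo "edge: table $src -> table $tgt"
```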
1. BUSINESS FOR ALL DATA PLATFORM: commercial project
2. BUSINESS FOR ALL DATA PLATFORM: compute engine
3. DEVOPS FOR ALL DATA PLATFORM: operations engine
4. DATA GOVERN FOR ALL DATA PLATFORM: data governance engine
5. DATA Integrate FOR ALL DATA PLATFORM: data integration engine
6. AI FOR ALL DATA PLATFORM: artificial intelligence engine
7. DATA ODS FOR ALL DATA PLATFORM: data collection engine
8. OLAP FOR ALL DATA PLATFORM: OLAP query engine
9. OPTIMIZE FOR ALL DATA PLATFORM: performance optimization engine
10. DATABASES FOR ALL DATA PLATFORM: distributed storage engine
```sql
SET execution.checkpointing.interval = 15sec;

CREATE CATALOG alldata_catalog WITH (
    'type' = 'table-store',
    'warehouse' = 'file:/tmp/table_store'
);

USE CATALOG alldata_catalog;

CREATE TABLE word_count (
    word STRING PRIMARY KEY NOT ENFORCED,
    cnt BIGINT
);

CREATE TEMPORARY TABLE word_table (
    word STRING
) WITH (
    'connector' = 'datagen',
    'fields.word.length' = '1'
);

INSERT INTO word_count SELECT word, COUNT(*) FROM word_table GROUP BY word;

-- POC Test OLAP QUERY
SET sql-client.execution.result-mode = 'tableau';
RESET execution.checkpointing.interval;
SET execution.runtime-mode = 'batch';
SELECT * FROM word_count;

-- POC Test Stream QUERY
-- SET execution.runtime-mode = 'streaming';
-- SELECT `interval`, COUNT(*) AS interval_cnt FROM
--   (SELECT cnt / 10000 AS `interval` FROM word_count) GROUP BY `interval`;
```
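The commented streaming query buckets word counts into 10000-wide intervals via integer division (`cnt / 10000 AS interval`). Shell arithmetic illustrates the same bucketing (illustration only; the sample counts are made up):

```shell
# cnt / 10000 maps each count to a 10000-wide interval,
# mirroring the streaming query above.
for cnt in 9999 10000 25000; do
  echo "cnt=$cnt interval=$(( cnt / 10000 ))"
done
```

So counts 0-9999 land in interval 0, 10000-19999 in interval 1, and so on; `interval_cnt` then counts how many distinct words fall in each bucket.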
### 2. Dlink started and running successfully
### 3. OLAP query
4.1 Stream Read 1
4.2 Stream Read 2
Component | Description | Important Composition |
---|---|---|
aiStudio | AI STUDIO FOR ALL DATA PLATFORM artificial intelligence engine | AI engine |
aiStudioTasks | AI STUDIO TASKS FOR ALL DATA PLATFORM MLAPPS Engine | AI model tasks |
assembly | WHOLE PACKAGE BUILD FOR ALL DATA PLATFORM assembly engine | whole-package build engine |
buried | BURIED FOR ALL DATA PLATFORM data acquisition engine | event-tracking solution |
buriedShop | BURIED SHOP FOR ALL DATA PLATFORM commerce engine | multi-client mall |
buriedTrade | BURIED TRADE FOR ALL DATA PLATFORM commerce engine | commerce system |
crawlerData | CRAWLER DATA TRADE FOR ALL DATA PLATFORM commerce engine | crawler tasks |
crawlerPlatform | CRAWLER PLATFORM FOR ALL DATA PLATFORM commerce engine | crawler engine system |
dataOlap | OLAP FOR ALL DATA PLATFORM OLAP query engine | hybrid OLAP query engine |
dataSync | DATA Integrate FOR ALL DATA PLATFORM Data Integration Engine | data integration engine |
dataSRE | DATA SRE FOR ALL DATA PLATFORM operations engine | intelligent big data operations engine |
deploy | DEPLOY FOR ALL DATA PLATFORM deployment engine | installation and deployment |
documents | DOCUMENT FOR ALL DATA PLATFORM documentation | official documentation |
govern | DATA GOVERN FOR ALL DATA PLATFORM Data Governance Engine | data governance engine |
oneHub | ONE HUB FOR ALL DATA PLATFORM ONE HUB Engine | AllData headquarters front-end and back-end solution |
oneLake | ONE LAKE FOR ALL DATA PLATFORM ONE LAKE engine | data lake engine |
studioSystem | STUDIO SYSTEM FOR ALL DATA PLATFORM DEVELOP IDE ENGINE | big data stream/batch computing platform |
studioTasks | STUDIO TASKS FOR ALL DATA PLATFORM Data Task Engine | big data stream/batch computing tasks |
docs | Document | documentation |
AllData | The AllData community project builds a one-stop big data platform through secondary development of big data ecosystem components, covering data collection, storage, computing, and development | One-stop open-source big data platform: the AllData community project on GitHub |
1. AllData front-end solution
oneHub/eladmin-web
2. AllData back-end solution
oneHub/eladmin
3. Multi-tenant operations platform front-end
oneHub/tenant
4. Multi-tenant operations platform back-end
oneHub/tenantBack
S3 Hudi write succeeded
AllData is one of the few open-source big data platform projects on GitHub. It aims to grow into a complete solution for the problems that arise in big data e-commerce scenarios, and into a general-purpose big data foundation that other developers can use and contribute to. My original intention is to create a product that is useful to society.
Mall storefront:
mall-shopping-app: mall App
mall-shopping-app-service: mall App service
mall-shopping-wc: mall WeChat mini program
mall-shopping-mobile: mall mobile storefront
mall-shopping-pc: mall PC storefront
pcAdminService: mall PC service
mobileService: mall storefront service (the mini program and mobile storefront both use this API)
Mall admin:
mall-admin-web: mall admin front-end
pcAdminService: mall admin service
log-collect-server:
server-side log collection system
log-collect-client:
client SDK integrated into each app; collects app-side client data
data-import-export:
data integration (import/export) based on DataX
data-spider:
crawler platform supporting configurable tasks that crawl public web data
Distributed file system: HDFS
Distributed databases: HBase, MongoDB, Elasticsearch
Distributed in-memory store: Redis
compute-mr (offline computing): Hive, MR
compute-realtime (stream computing): Storm, Flink
multi-dimension-analysis (multi-dimensional analysis): Kylin, Spark
task-schedular: task scheduling
task-ops: task operations
data-face: data visualization
data-insight: user profiling and analysis
system-recommender: recommendation
system-ad: advertising
system-search: search
system-anti-cheating: anti-cheating
system-report-analysis: report analysis
system-elk: ELK log system providing the log search platform
system-apm: SkyWalking monitoring platform
system-deploy: packaging platform for k8s, Scala, Play Framework, and Docker
job-schedule: job submission platform
10.1 Before starting, package the dubbo-servie project: enter the dubbo directory,
run mvn clean package -DskipTests=TRUE, then run mvn install.
10.2 Start the dubbo project with the Tomcat port set to 8091.
10.3 Start the mall subsystems:
10.3.1 Front-end: start the mall-admin-web project; in the project directory run npm install, then npm run dev.
10.3.2 Back-end: start the pcAdminService/mall-admin-search project
with Tomcat port 8092, then start the pcManage project with Tomcat port 8093.
Storefront: preview the mini program on a phone; mobile access: http://localhost:6255
10.3.3 Mini program and mobile storefront
10.3.3.1 Front-end: mall mini program. Start the mall-shopping-wc project,
install WeChat DevTools, configure the developer key and secret,
import the project with WeChat DevTools, click Compile, and preview on a phone.
10.3.3.2 Front-end: mall mobile storefront. Start mall-shopping-mobile;
in the project directory run npm install and npm run dev.
10.3.3.3 Back-end: the mini program and mobile storefront share the same back-end service;
start the mobileService project with Tomcat port 8094.
10.3.4 Mall PC storefront: visit http://localhost:8099
10.3.4.1 Front-end: start the mall-shopping-pc project;
in the project directory run npm install and npm run dev.
10.3.4.2 Back-end: start the pcAdminService project with Tomcat port 8095.
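The steps above can be collected into one startup script. A dry-run sketch (assumptions: run from the repository root; each command is printed by a `run` helper rather than executed, so the sequence can be reviewed first; drop the helper to run for real):

```shell
# Dry-run of the startup sequence from section 10: prints each command
# instead of executing it. Change the helper to `eval "$*"` to execute.
run() { echo "+ $*"; }

run cd dubbo                         # 10.1 package the dubbo-servie project
run mvn clean package -DskipTests=TRUE
run mvn install
run cd ../mall-admin-web             # 10.3.1 admin front-end
run npm install
run npm run dev
run cd ../mall-shopping-mobile       # 10.3.3.2 mobile storefront
run npm install
run npm run dev
```

The Tomcat-hosted services (ports 8091-8095) are started separately in their containers, as described in steps 10.2, 10.3.2, 10.3.3.3, and 10.3.4.2.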
11.1 Containerized deployment: system-deploy
11.2 Automated operations platform: system-devops
11.3 Kong as the gateway entry point for service calls: system-api-gateway
11.4 Log center: system-elk
11.5 Alerting platform: system-alarm-platform
11.6 Monitoring system
11.7 Data collection
11.8 Data display
11.9 Monitoring center: system-apm
11.10 Apollo as the configuration center: system-config