Abstract
With the development and maturity of big data technology, more and more enterprises and organizations use big data for analysis and decision-making. The data analyzed comes mainly from log files, so analyzing log files is an important and critical step.
The system implements the generation, transfer, and analysis of log information, and finally persists the results and presents them visually. The business requirements it fulfills are computing Top-N course statistics, Top-N course statistics by city, and Top-N statistics by traffic.
Proceeding from requirements analysis through architecture design and database design to implementation, the system realizes data collection, a data collection cluster, a message queue, a big data cluster, Spark data processing and persistence, and a Java web application that reads data from the database and visualizes it. This paper describes the development of the system in terms of system description, system analysis, system design, system implementation, and system testing. The system uses various big data frameworks to support data collection and analysis: a combined Hadoop and Spark cluster, the Flume framework for log collection and processing, the Kafka framework for the message queue, and ZooKeeper for cluster fault-tolerance management. Finally, Spark SQL performs offline batch processing of the big data on the Spark cluster. Visualization is implemented with the ECharts open-source framework, among other technologies.
Keywords: Spark SQL; offline batch processing; log collection; MySQL database; ECharts
Abstract
With the development and maturity of big data technology, more and more enterprises and organizations use big data for analysis and decision-making. The data analyzed comes mainly from log files, so the analysis of log files is an important and critical step.
The function of this system is to generate log information, transfer it, analyze it, and finally persist the results and display them visually. The business requirements it fulfills are computing Top-N course statistics, Top-N course statistics by city, and Top-N statistics by traffic.
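Each of the Top-N requirements above reduces to a grouped aggregation followed by a ranking window function. The sketch below shows the query pattern using Python's built-in sqlite3 as a lightweight stand-in for Spark SQL (the same SQL syntax runs on a Spark cluster); the table and column names (`access_log`, `course_id`, `city`, `traffic`) are illustrative assumptions, not the actual thesis schema.

```python
import sqlite3

# Hypothetical schema: one row per access record, aggregated per course and city.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE access_log (course_id TEXT, city TEXT, traffic INTEGER)")
conn.executemany(
    "INSERT INTO access_log VALUES (?, ?, ?)",
    [
        ("c1", "Beijing", 100), ("c2", "Beijing", 300), ("c3", "Beijing", 200),
        ("c1", "Shanghai", 500), ("c2", "Shanghai", 50),
    ],
)

# Top-N courses per city by total traffic: aggregate, rank within each city
# with ROW_NUMBER(), then keep the first N ranks. This is the standard
# Top-N-per-group pattern in Spark SQL as well.
TOP_N = 2
query = """
    SELECT city, course_id, total FROM (
        SELECT city, course_id, SUM(traffic) AS total,
               ROW_NUMBER() OVER (
                   PARTITION BY city ORDER BY SUM(traffic) DESC
               ) AS rk
        FROM access_log
        GROUP BY city, course_id
    ) WHERE rk <= ?
"""
top = conn.execute(query, (TOP_N,)).fetchall()
for city, course, total in sorted(top):
    print(city, course, total)
```

Dropping the `PARTITION BY city` clause yields the global Top-N course statistic; swapping `SUM(traffic)` for `COUNT(*)` ranks by access count instead of traffic volume.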
Proceeding from requirements analysis through architecture design and database design to implementation, the system realizes data collection, a data collection cluster, a message queue, a big data cluster, Spark data processing and persistence, and a Java web application that reads data from the database and visualizes it. This paper describes the development of the system in terms of system description, system analysis, system design, system implementation, and system testing. The system uses various big data frameworks to support data collection and analysis: a combined Hadoop and Spark cluster, the Flume framework for log collection and processing, the Kafka framework for the message queue, and ZooKeeper for cluster fault-tolerance management. Finally, Spark SQL performs offline batch processing of the big data on the Spark cluster. Visualization is implemented with the ECharts open-source framework, among other technologies.
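The log-collection stage described above can be sketched as a Flume agent that tails a log file and delivers events to a Kafka topic through a memory channel. This is only one possible configuration under assumed names: the agent name `a1`, log path, topic name, and broker address are placeholders, not the thesis's actual deployment values.

```properties
# Flume agent: exec source -> memory channel -> Kafka sink
# (all names and addresses below are illustrative placeholders)
a1.sources = r1
a1.channels = c1
a1.sinks = k1

# Tail the application log file as the event source
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /var/log/app/access.log
a1.sources.r1.channels = c1

# Buffer events in memory between source and sink
a1.channels.c1.type = memory
a1.channels.c1.capacity = 10000
a1.channels.c1.transactionCapacity = 1000

# Publish events to a Kafka topic for downstream Spark processing
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.kafka.topic = access-log
a1.sinks.k1.kafka.bootstrap.servers = localhost:9092
a1.sinks.k1.channel = c1
```

With this wiring, the Spark job consumes from the `access-log` topic, and ZooKeeper (used by the Kafka cluster) provides the fault-tolerance coordination the abstract mentions.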
Keywords: Spark SQL; offline batch processing; log collection; MySQL database; ECharts