
The design and implementation of Prometheus Alertmanager


Preface

When analyzing source code, you don't have to dive straight into the code. First look at what core features the product provides, sort out those feature points, and then read the code feature by feature. The best place to learn a product's features is its official documentation; for Alertmanager that is https://prometheus.io/docs/alerting/latest/alertmanager/, which says:

The Alertmanager handles alerts sent by client applications such as the Prometheus server. It takes care of deduplicating, grouping, and routing them to the correct receiver integration such as email, PagerDuty, or OpsGenie. It also takes care of silencing and inhibition of alerts.

(figure: a rough architecture sketch of Alertmanager)

From this we can sketch a simple architecture diagram and refine it gradually: the alert producer only generates alerts, while all the alert-processing logic (dedup, group, inhibition, silence, route) lives here in Alertmanager. So we can read the source along exactly these points:

  • How many resident goroutines are there, and what does the process model look like?
  • What does the externally exposed API look like?
  • How is alert inhibition implemented?
  • How is alert silencing implemented?
  • How is alert grouping implemented?
  • How does the service achieve high availability?

With these six questions in mind, let's start studying Alertmanager.

Process model and the lifecycle of an alert

Process model

First walk through the core logic executed in main.go (much like skimming a book's table of contents before reading it, so you roughly know what to expect); that is basically enough to determine which resident goroutines run in the background.
(figure: the resident goroutines started from main.go)

The lifecycle of an alert

For an alert to have a lifecycle, it first has to be created. As we saw in the introduction, Alertmanager creates alerts through its externally exposed API. The core code:

  1. The API receives the alert, puts it into an in-memory store, and then notifies the listeners (subscribers waiting for new alerts):
func (api *API) postAlertsHandler(params alert_ops.PostAlertsParams) middleware.Responder {
	// (abridged) validation of the incoming alerts, producing validAlerts, is elided
	if err := api.alerts.Put(validAlerts...); err != nil {
		level.Error(logger).Log("msg", "Failed to create alerts", "err", err)
		return alert_ops.NewPostAlertsInternalServerError().WithPayload(err.Error())
	}
	return alert_ops.NewPostAlertsOK()
}
func (a *Alerts) Put(alerts ...*types.Alert) error {
	for _, alert := range alerts {
		if err := a.alerts.Set(alert); err != nil {
			level.Error(a.logger).Log("msg", "error on set alert", "err", err)
			continue
		}
		a.mtx.Lock()
		// This is the key part: the dispatcher has already registered itself as a listener here.
		for _, l := range a.listeners {
			select {
			case l.alerts <- alert:
			case <-l.done:
			}
		}
		a.mtx.Unlock()
	}
	return nil
}
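The listeners above come from a subscription: consumers such as the dispatcher subscribe to the store and receive every new alert over a channel. Here is a minimal, self-contained sketch of that pattern; the Store and Alert types are simplified stand-ins, not the real mem.Alerts (whose Subscribe returns an AlertIterator):

package main

import (
	"fmt"
	"sync"
)

// Alert is a stand-in for types.Alert.
type Alert struct{ Name string }

// Store is a pared-down sketch of mem.Alerts: Put fans every new
// alert out to all registered listeners.
type Store struct {
	mtx       sync.Mutex
	listeners []chan *Alert
}

// Subscribe registers a listener channel (the real code wraps it
// in an AlertIterator).
func (s *Store) Subscribe() <-chan *Alert {
	s.mtx.Lock()
	defer s.mtx.Unlock()
	ch := make(chan *Alert, 16)
	s.listeners = append(s.listeners, ch)
	return ch
}

// Put stores the alert (storage elided) and notifies all subscribers.
func (s *Store) Put(a *Alert) {
	s.mtx.Lock()
	defer s.mtx.Unlock()
	for _, l := range s.listeners {
		l <- a
	}
}

func main() {
	s := &Store{}
	ch := s.Subscribe()
	s.Put(&Alert{Name: "HighErrorRate"}) // buffered channel, no deadlock
	fmt.Println((<-ch).Name)             // prints: HighErrorRate
}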
  2. The alert is then handed over to the Dispatcher:
func (d *Dispatcher) run(it provider.AlertIterator) {
	cleanup := time.NewTicker(30 * time.Second)
	defer cleanup.Stop()

	defer it.Close()

	for {
		select {
		case alert, ok := <-it.Next(): // a new alert has arrived
			if !ok {
				// Iterator closed (error handling elided in this excerpt).
				continue
			}
			for _, r := range d.route.Match(alert.Labels) {
				d.processAlert(alert, r)
			}
		case <-d.ctx.Done():
			return
		}
	}
}
// processAlert determines in which aggregation group the alert falls
// and inserts it.
func (d *Dispatcher) processAlert(alert *types.Alert, route *Route) {
	groupLabels := getGroupLabels(alert, route)
	fp := groupLabels.Fingerprint()

	routeGroups := d.aggrGroupsPerRoute[route] // (locking and map initialization elided)
	ag, ok := routeGroups[fp]
	if ok {
		ag.insert(alert)
		return
	}
	ag = newAggrGroup(d.ctx, groupLabels, route, d.timeout, d.logger)
	routeGroups[fp] = ag
	d.aggrGroupsNum++
	d.metrics.aggrGroups.Inc()

	// Insert the 1st alert in the group before starting the group's run()
	// function, to make sure that when the run() will be executed the 1st
	// alert is already there.
	ag.insert(alert)
	
	go ag.run(func(ctx context.Context, alerts ...*types.Alert) bool {
		_, _, err := d.stage.Exec(ctx, d.logger, alerts...)
		return err == nil
	})
}

The core here is d.stage.Exec inside ag.run, and d.stage is built when the dispatcher is created:

pipeline := pipelineBuilder.New(
	receivers,
	waitFunc,
	inhibitor,
	silencer,
	timeIntervals,
	notificationLog,
	pipelinePeer,
)
disp = dispatch.NewDispatcher(alerts, routes, pipeline, marker, timeoutFunc, nil, logger, dispMetrics)

After that, each stage in the pipeline processes the alerts in turn. The stages are the focus of the following sections and will be covered in detail there.

External API

api, err := api.New(api.Options{
	Alerts:      alerts,
	Silences:    silences,
	StatusFunc:  marker.Status,
	Peer:        clusterPeer,
	Timeout:     *httpTimeout,
	Concurrency: *getConcurrency,
	Logger:      log.With(logger, "component", "api"),
	Registry:    prometheus.DefaultRegisterer,
	GroupFunc:   groupFn,
})

This is how main initializes the API server; it shows the components the API depends on: the alert store, the silence store, the status marker, and the cluster peer. The handlers registered on the API are:

openAPI.AlertGetAlertsHandler = alert_ops.GetAlertsHandlerFunc(api.getAlertsHandler)
openAPI.AlertPostAlertsHandler = alert_ops.PostAlertsHandlerFunc(api.postAlertsHandler)
openAPI.AlertgroupGetAlertGroupsHandler = alertgroup_ops.GetAlertGroupsHandlerFunc(api.getAlertGroupsHandler)
openAPI.GeneralGetStatusHandler = general_ops.GetStatusHandlerFunc(api.getStatusHandler)
openAPI.ReceiverGetReceiversHandler = receiver_ops.GetReceiversHandlerFunc(api.getReceiversHandler)
openAPI.SilenceDeleteSilenceHandler = silence_ops.DeleteSilenceHandlerFunc(api.deleteSilenceHandler)
openAPI.SilenceGetSilenceHandler = silence_ops.GetSilenceHandlerFunc(api.getSilenceHandler)
openAPI.SilenceGetSilencesHandler = silence_ops.GetSilencesHandlerFunc(api.getSilencesHandler)
openAPI.SilencePostSilencesHandler = silence_ops.PostSilencesHandlerFunc(api.postSilencesHandler)
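As a quick usage example, here is how a client could create an alert through postAlertsHandler. This is a hedged sketch that assumes an Alertmanager listening on the default localhost:9093 and uses the v2 endpoint /api/v2/alerts:

package main

import (
	"bytes"
	"fmt"
	"net/http"
	"time"
)

// Post a single alert to a locally running Alertmanager.
func main() {
	now := time.Now().UTC().Format(time.RFC3339)
	body := fmt.Sprintf(`[{
		"labels":      {"alertname": "HighErrorRate", "service": "mysql"},
		"annotations": {"summary": "5xx rate above threshold"},
		"startsAt":    %q
	}]`, now)

	resp, err := http.Post(
		"http://localhost:9093/api/v2/alerts",
		"application/json",
		bytes.NewBufferString(body),
	)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status) // 200 OK on success
}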

Alert grouping

Alert grouping merges related alerts into a single notification. Especially after a serious failure, a pile of alerts fires at once (a broken disk, say, can trigger 5xx errors and slow responses), and the notifications can easily flood the receiver; from the on-call engineer's point of view, one alert is enough.
Before looking at grouping itself, let's briefly look at Alertmanager's routing mechanism for alert processing. A route is declared in the config file like this:

# The root node of the routing tree.
route: <route>

Routes can be nested arbitrarily deep. When an alert is produced, matching starts from the root; when a child route matches, its continue flag decides whether the sibling routes are checked as well, and if none of the child routes can handle the alert, it stays with the current node. Here is a concrete configuration (a sketch of the matching recursion follows it):

# The root route with all parameters, which are inherited by the child
# routes if they are not overwritten.
route:
  receiver: 'default-receiver'
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 4h
  group_by: [cluster, alertname]
  # All alerts that do not match the following child routes
  # will remain at the root node and be dispatched to 'default-receiver'.
  routes:
  # All alerts with service=mysql or service=cassandra
  # are dispatched to the database pager.
  - receiver: 'database-pager'
    group_wait: 10s
    matchers:
    - service=~"mysql|cassandra"
  # All alerts with the team=frontend label match this sub-route.
  # They are grouped by product and environment rather than cluster
  # and alertname.
  - receiver: 'frontend-pager'
    group_by: [product, environment]
    matchers:
    - team="frontend"

For grouping, the two key fields are group_by and group_wait:

  • group_by: which labels to group alerts by
  • group_wait: how long to wait before sending the group's first notification (to let related alerts arrive)

The grouping logic all lives in dispatch.go; a sketch of how the group key is derived follows below.
(figure: alert grouping logic in dispatch.go)
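A hedged sketch of the grouping idea: keep only the group_by labels from an alert's label set and fingerprint them; alerts sharing the same fingerprint land in the same aggregation group. This mirrors getGroupLabels plus groupLabels.Fingerprint() in the dispatcher, simplified into one function:

import "github.com/prometheus/common/model"

// groupKey reduces an alert's labels to the configured group_by
// labels and fingerprints the result; alerts sharing the key are
// batched into one aggregation group (simplified sketch).
func groupKey(alertLabels model.LabelSet, groupBy map[model.LabelName]struct{}) model.Fingerprint {
	groupLabels := model.LabelSet{}
	for name, value := range alertLabels {
		if _, ok := groupBy[name]; ok {
			groupLabels[name] = value
		}
	}
	return groupLabels.Fingerprint()
}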

Alert inhibition

Inhibition means that when alert A has already fired, alert B is suppressed. This fits cases where A is the more severe alert and resolving A will naturally resolve B as well. The already-firing alert is called the source, and the alert to be suppressed is the target; when an incoming alert matches the target matchers while a matching source alert exists, it is muted.

# A list of matchers that have to be fulfilled by the target
# alerts to be muted.
target_matchers:
  [ - <matcher> ... ]
# A list of matchers for which one or more alerts have
# to exist for the inhibition to take effect.
source_matchers:
  [ - <matcher> ... ]

The implementation lives in inhibit/inhibit.go; once again, the module split is admirably clean. The inhibitor first subscribes to new alerts and builds a cache of the alerts matching source_matchers:
(figure: the Inhibitor subscribing to alerts and building the source cache)

With the local scache built, the pipeline checks whether each alert should be muted; muted alerts are simply dropped from alerts (each pipeline stage keeps updating the alerts slice):
(figures: the mute check in the pipeline)
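A hedged sketch of the core check. InhibitRule here is a hypothetical, pared-down stand-in for the real rule type (the real Inhibitor additionally honors an equal list requiring certain labels to agree between source and target):

// InhibitRule is a pared-down stand-in for the real rule type.
type InhibitRule struct {
	SourceMatchers Matchers // must match an already-firing alert
	TargetMatchers Matchers // must match the alert being checked
}

// mutes reports whether lset should be suppressed: the rule's target
// matchers must match it, and at least one firing alert in the source
// cache must match the source matchers (simplified sketch).
func (r *InhibitRule) mutes(lset model.LabelSet, firing []model.LabelSet) bool {
	if !r.TargetMatchers.Matches(lset) {
		return false
	}
	for _, src := range firing {
		if r.SourceMatchers.Matches(src) {
			return true
		}
	}
	return false
}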

Alert silencing

Silencing means that if an alert's own labels match the labels configured in a silence, the alert is muted for a period of time (set in the silence itself). With the code above as a base, the silence-related code reads quickly. First, the silence stage is set when the pipeline is built.
(figure: the silence stage set up when building the pipeline)

After that, just like the grouping mechanism above, the mute function is executed during pipeline Exec:

func (s *Silencer) Mutes(lset model.LabelSet) bool {
	// 1. Look up the silences already recorded for this label set.
	fp := lset.Fingerprint()
	activeIDs, pendingIDs, markerVersion, _ := s.marker.Silenced(fp)
	// (abridged) allSils and newVersion come from querying the silence
	// store for active/pending silences whose matchers match lset.
	// 2. Recompute the silence IDs by their current state.
	now := s.silences.nowUTC()
	for _, sil := range allSils {
		switch getState(sil, now) {
		case types.SilenceStatePending:
			pendingIDs = append(pendingIDs, sil.Id)
		case types.SilenceStateActive:
			activeIDs = append(activeIDs, sil.Id)
		default:
			// Do nothing, the silence has expired in the meantime.
		}
	}
	// 3. Update the state kept in the marker.
	s.marker.SetActiveOrSilenced(fp, newVersion, activeIDs, pendingIDs)
	// The alert is muted if any silence is currently active.
	return len(activeIDs) > 0
}
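Silences themselves are created through the API (or via amtool and the web UI). As a usage example, a hedged sketch of creating one via POST /api/v2/silences, again assuming a local Alertmanager on the default port:

package main

import (
	"bytes"
	"fmt"
	"net/http"
	"time"
)

// Create a two-hour silence for all alerts with service="mysql".
func main() {
	start := time.Now().UTC()
	end := start.Add(2 * time.Hour)
	body := fmt.Sprintf(`{
		"matchers":  [{"name": "service", "value": "mysql", "isRegex": false}],
		"startsAt":  %q,
		"endsAt":    %q,
		"createdBy": "oncall",
		"comment":   "planned maintenance"
	}`, start.Format(time.RFC3339), end.Format(time.RFC3339))

	resp, err := http.Post(
		"http://localhost:9093/api/v2/silences",
		"application/json",
		bytes.NewBufferString(body),
	)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status) // 200 OK with the new silence ID on success
}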

Alert dispatching

The overall dispatching is orchestrated by the dispatcher. It subscribes to the alerts produced via the API, and whenever a new alert arrives, it first runs it through the routes from the config to find the matching ones:

for _, r := range d.route.Match(alert.Labels) {
	d.processAlert(alert, r)
}

The alerts are then grouped by label and handed to the pipeline:

go ag.run(func(ctx context.Context, alerts ...*types.Alert) bool {
	_, _, err := d.stage.Exec(ctx, d.logger, alerts...)
	return err == nil
})

The stage here is the pipeline built by PipelineBuilder.New; the full construction code:

// New returns a map of receivers to Stages.
func (pb *PipelineBuilder) New(
	receivers map[string][]Integration,
	wait func() time.Duration,
	inhibitor *inhibit.Inhibitor,
	silencer *silence.Silencer,
	times map[string][]timeinterval.TimeInterval,
	notificationLog NotificationLog,
	peer Peer,
) RoutingStage {
	rs := make(RoutingStage, len(receivers))

	ms := NewGossipSettleStage(peer)
	is := NewMuteStage(inhibitor)
	tas := NewTimeActiveStage(times)
	tms := NewTimeMuteStage(times)
	ss := NewMuteStage(silencer)

	for name := range receivers {
		st := createReceiverStage(name, receivers[name], wait, notificationLog, pb.metrics)
		rs[name] = MultiStage{ms, is, tas, tms, ss, st}
	}
	return rs
}

// createReceiverStage creates a pipeline of stages for a receiver.
func createReceiverStage(
	name string,
	integrations []Integration,
	wait func() time.Duration,
	notificationLog NotificationLog,
	metrics *Metrics,
) Stage {
	var fs FanoutStage
	for i := range integrations {
		recv := &nflogpb.Receiver{
			GroupName:   name,
			Integration: integrations[i].Name(),
			Idx:         uint32(integrations[i].Index()),
		}
		var s MultiStage
		s = append(s, NewWaitStage(wait))
		s = append(s, NewDedupStage(&integrations[i], notificationLog, recv))
		s = append(s, NewRetryStage(integrations[i], name, metrics))
		s = append(s, NewSetNotifiesStage(notificationLog, recv))

		fs = append(fs, s)
	}
	return fs
}
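Each stage implements a common Stage interface, and a MultiStage simply threads the (possibly shrinking) alert slice through its sub-stages. Roughly, simplified from notify/notify.go:

// A Stage processes alerts and hands the survivors to the next stage.
type Stage interface {
	Exec(ctx context.Context, l log.Logger, alerts ...*types.Alert) (context.Context, []*types.Alert, error)
}

// MultiStage executes its stages in order; each stage may filter
// alerts (mute, dedup, ...), and an empty slice short-circuits.
type MultiStage []Stage

func (ms MultiStage) Exec(ctx context.Context, l log.Logger, alerts ...*types.Alert) (context.Context, []*types.Alert, error) {
	var err error
	for _, s := range ms {
		if len(alerts) == 0 {
			return ctx, nil, nil
		}
		ctx, alerts, err = s.Exec(ctx, l, alerts...)
		if err != nil {
			return ctx, nil, err
		}
	}
	return ctx, alerts, nil
}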

So the overall flow is:
(figure: the end-to-end alert-processing flow)

High availability

Alertmanager achieves high availability through gossip (if one server goes down, the others keep serving); the implementation lives under the cluster/ directory.

	// Synchronize notification-log messages across the cluster.
	if peer != nil {
		// Register the handler (Merge) for "nfl" messages received from other cluster nodes.
		c := peer.AddState("nfl", notificationLog, prometheus.DefaultRegisterer)
		// Broadcast local changes to the cluster.
		notificationLog.SetBroadcast(c.Broadcast)
	}
	// Synchronize the silences created through the API.
	if peer != nil {
		// Register the handler (Merge) for "sil" messages received from other cluster nodes.
		c := peer.AddState("sil", silences, prometheus.DefaultRegisterer)
		// Broadcast local changes to the cluster.
		silences.SetBroadcast(c.Broadcast)
	}

Under the hood this relies on HashiCorp's memberlist; a minimal usage sketch follows the figure.
(figure: the memberlist-based cluster layer)
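A hedged, minimal sketch of how two nodes form a cluster with memberlist, independent of Alertmanager's wrapper code (assumes github.com/hashicorp/memberlist; the peer address is hypothetical):

package main

import (
	"fmt"

	"github.com/hashicorp/memberlist"
)

func main() {
	// Start a node with the default LAN configuration.
	list, err := memberlist.Create(memberlist.DefaultLANConfig())
	if err != nil {
		panic(err)
	}

	// Join the cluster by contacting any known member
	// (hypothetical peer address for illustration).
	if _, err := list.Join([]string{"10.0.0.1:7946"}); err != nil {
		panic(err)
	}

	// Every node eventually sees the full membership via gossip.
	for _, m := range list.Members() {
		fmt.Printf("member %s at %s\n", m.Name, m.Addr)
	}
}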

Summary

The components inside are really well designed; several of the implementations could be extracted as standalone packages:

  • Task orchestration via a pipeline; this pattern works for any workflow-style code.
  • The gossip-based cluster could be pulled out as a standalone package for quickly building clusters.
  • Every component exposes appropriate metrics (in hindsight, metrics need to be considered at the design stage).
  • Timeouts and retries show the backoff algorithm once again; a sketch follows below.
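A hedged sketch of the exponential-backoff retry idea referenced in the last point. This is a generic version of the technique, not Alertmanager's exact RetryStage (which delegates to a backoff library):

import (
	"context"
	"fmt"
	"time"
)

// retryWithBackoff retries fn with exponentially growing waits until
// it succeeds or the context is cancelled (a generic sketch of the
// idea, not Alertmanager's RetryStage).
func retryWithBackoff(ctx context.Context, fn func() error) error {
	const maxInterval = 30 * time.Second
	interval := 200 * time.Millisecond

	for {
		err := fn()
		if err == nil {
			return nil
		}
		select {
		case <-time.After(interval):
			// Double the wait, up to the cap.
			if interval *= 2; interval > maxInterval {
				interval = maxInterval
			}
		case <-ctx.Done():
			return fmt.Errorf("giving up: %w (last error: %v)", ctx.Err(), err)
		}
	}
}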