Problem Analysis
An AWS cloud instance crashed several times in a row. We examined the system dmesg log to track down the cause of the crashes:
[789106.990754] Uhhuh. NMI received for unknown reason 21 on CPU 6.
[789106.990754] Do you have a strange power saving mode enabled?
[789106.990755] Kernel panic - not syncing: NMI: Not continuing
[789106.990755] CPU: 6 PID: 2644936 Comm: server Not tainted 4.14.81.bm.21-amd64 #1
[789106.990755] Hardware name: Amazon EC2 m5a.2xlarge/, BIOS 1.0 10/16/2017
[789106.990756] Call Trace:
[789106.990756] dump_stack+0x5c/0x85
[789106.990756] panic+0xe4/0x232
[789106.990756] ? printk+0x52/0x6e
[789106.990757] nmi_panic+0x35/0x40
[789106.990757] unknown_nmi_error+0x6f/0x80
[789106.990757] do_nmi+0xe5/0x130
[789106.990757] nmi+0x83/0xcc
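If the instance has already rebooted, lines like these can usually be recovered from the previous boot's logs. A minimal sketch, assuming journald is configured to persist logs across reboots (Storage=persistent in /etc/systemd/journald.conf); on other setups /var/log/kern.log or /var/log/messages may hold the same lines:
# List recorded boots, then grep the previous boot's kernel messages for the NMI panic
journalctl --list-boots
journalctl -k -b -1 | grep -A 12 'NMI received for unknown reason'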
Cross-referencing the kernel source to see where the panic was raised:
Source: https://elixir.bootlin.com/linux/v4.7/source/arch/x86/kernel/nmi.c#L75
static void
unknown_nmi_error(unsigned char reason, struct pt_regs *regs)
{
    int handled;

    /*
     * Use 'false' as back-to-back NMIs are dealt with one level up.
     * Of course this makes having multiple 'unknown' handlers useless
     * as only the first one is ever run (unless it can actually determine
     * if it caused the NMI)
     */
    handled = nmi_handle(NMI_UNKNOWN, regs);
    if (handled) {
        __this_cpu_add(nmi_stats.unknown, handled);
        return;
    }

    __this_cpu_add(nmi_stats.unknown, 1);

    pr_emerg("Uhhuh. NMI received for unknown reason %02x on CPU %d.\n",
             reason, smp_processor_id());
    pr_emerg("Do you have a strange power saving mode enabled?\n");
    if (unknown_nmi_panic || panic_on_unrecovered_nmi)
        nmi_panic(regs, "NMI: Not continuing");

    pr_emerg("Dazed and confused, but trying to continue\n");
}
From the source the direct cause is clear: the kernel received an NMI it could not attribute to any registered handler, and because unknown_nmi_panic or panic_on_unrecovered_nmi was set, nmi_panic() brought the system down.
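Since the panic branch only fires when one of those two sysctls is set, it is worth confirming how the crashed machines were configured. A quick check (both are standard kernel sysctls; 1 means panic, 0 means log and continue):
sysctl kernel.unknown_nmi_panic kernel.panic_on_unrecovered_nmi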
But what actually raised the NMI in the first place? A round of googling turned up conflicting opinions: some blamed software bugs, others hardware faults.
Possible cause 1: a software bug
Look up the atop system snapshots taken around the time of the crash:
atop -y -r atop_20210225_until10:27:29
The panicking PID traces back to a child of the business process game, so conceivably that child process misbehaved and triggered the panic (a suspicion only).
That said, I find it unlikely that user-space activity on its own could crash the kernel; a replay sketch follows below.
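For reference, a sketch of replaying a raw atop file around the crash window; -r, -b and -e are standard atop options, and the times below are illustrative:
# Replay from 10:20 to 10:28; inside atop, press 't' to step forward one sample and 'T' to step back
atop -r atop_20210225_until10:27:29 -b 1020 -e 1028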
Possible cause 2: a hardware problem
Case 1: a physical server (motherboard/power-supply class hardware), discussed on the AMD community forum:
https://community.amd.com/t5/server-gurus-discussions/solved-uhhuh-nmi-received-for-unknown-reason/td-p/74321
After some googleing, it looks like it might be a RAM problem.
As it is a production server with FC2 (cannot run OMSA except
with OMSA Knoppix), I'd appreciate some hints on what to look at.
Case 2: Red Hat's official description:
An interrupt is said to be masked when it has been disabled, or when the CPU has been instructed to ignore it. A non-maskable interrupt (NMI) cannot be ignored, and is generally used only for critical hardware errors.
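The kernel also counts every NMI it handles per CPU, which gives a rough sense of whether NMIs had been arriving on a machine before it died. A simple check:
# The NMI row in /proc/interrupts shows per-CPU non-maskable interrupt counts since boot
grep -i 'NMI' /proc/interrupts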
Taking the causes above together, we investigated from several directions:
- Collect the IPs of the crashed machines and check the hardware distribution: if the affected VMs all sat on the same host machine, the panics could stem from a single bad host environment. Conclusion: the VMs were spread non-affinely across different physical hosts, so a problem with one particular host can be ruled out;
- Analyze the kdump vmcore (provided kdump was enabled on the machine): crash /usr/lib/debug/boot/vmlinux-4.14.81.bm.21-amd64 dump.202102251020 (the exact command varies from machine to machine):
LOAD AVERAGE: 0.74, 0.77, 0.71
TASKS: 766
NODENAME: xxxx
RELEASE: 4.14.81.bm.21-amd64
VERSION: #1 SMP Debian 4.14.81.bm.21 Wed Apr 29 07:43:38 UTC 2020
MACHINE: x86_64 (2199 Mhz)
MEMORY: 31.4 GB
PANIC: "Kernel panic - not syncing: NMI: Not continuing"
PID: 2644936
COMMAND: "server"
TASK: ffff9a3fd280d000 [THREAD_INFO: ffff9a3fd280d000]
CPU: 6
STATE: TASK_RUNNING (PANIC)
crash> bt
PID: 2644936 TASK: ffff9a3fd280d000 CPU: 6 COMMAND: "server"
#0 [ffffa6ee8de47d70] machine_kexec at ffffffff9905749b
#1 [ffffa6ee8de47dc8] __crash_kexec at ffffffff99110d31
#2 [ffffa6ee8de47e88] panic at ffffffff9907cbb2
#3 [ffffa6ee8de47f10] nmi_panic at ffffffff9907c795
#4 [ffffa6ee8de47f18] unknown_nmi_error at ffffffff990280df
#5 [ffffa6ee8de47f30] do_nmi at ffffffff99028365
#6 [ffffa6ee8de47f50] nmi at ffffffff998018e3
RIP: 000000000040e35a RSP: 000000c0bf98d500 RFLAGS: 00000293
RAX: 000000000000004b RBX: 000000000000009b RCX: 000000c098114760
RDX: 0000000000000000 RSI: 00000000017602e0 RDI: 00000000000000d0
RBP: 000000c0bf98d530 R8: 0000000000000000 R9: 000000c057332b08
R10: 0000000000000000 R11: ffffffffffffffff R12: 0000000000000000
R13: 0000000000000080 R14: 0000000000000149 R15: ffffffffffffffff
ORIG_RAX: ffffffffffffffff CS: 0033 SS: 002b
crash> quit
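Beyond bt, a few more crash subcommands help when digging through a vmcore like this; a sketch of a follow-up session (all standard crash commands, output omitted):
crash> sys                 # system summary, including the panic string
crash> log                 # kernel ring buffer captured in the dump
crash> bt -a               # backtraces for every CPU, to see what the other cores were doing
crash> ps | grep server    # state of the business process and its threads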
Basic conclusion: the AWS host delivered an NMI to the virtual machine, most likely a hardware problem. (The saved registers support this: RIP 000000000040e35a with CS 0033 means CPU 6 was running ordinary user-space code of the server process when the NMI arrived, so nothing inside the guest kernel was obviously at fault.)
All we could do was hand the dmesg logs to AWS's hardware engineers for further investigation.
How to work around it?
As a stopgap, set kernel parameters to turn off the NMI panic:
cat /etc/sysctl.conf
kernel.unknown_nmi_panic = 0
kernel.panic_on_unrecovered_nmi = 0
Run sysctl -p to apply the settings.
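A sketch of persisting and verifying the workaround in one pass; the file name under /etc/sysctl.d/ is my choice, not a requirement, and note that this only suppresses the panic, it does not fix whatever raises the NMI:
# Persist both settings, reload all sysctl config files, then print the live values
printf 'kernel.unknown_nmi_panic = 0\nkernel.panic_on_unrecovered_nmi = 0\n' | sudo tee /etc/sysctl.d/90-nmi.conf
sudo sysctl --system
sysctl -n kernel.unknown_nmi_panic kernel.panic_on_unrecovered_nmi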
Analysis tools
- dmesg logs
- kdump
- atop
- the crash utility