
Solved: hadoop safemode is on

Source: 步旅网

Hadoop's safemode stays on: even after it is manually turned off, it automatically switches back on. There are two common causes:

Case 1: missing blocks

Open the NameNode web UI on port 50070; it shows a warning like this:

There are 6 missing blocks. The following files may be corrupted:


blk_1073744534 /tmp/hive/root/6f4a95a4-5b21-4624-baae-4e61fd16c40e/hive_2021-05-21_09-53-04_887_3047558412324042111-1/dummy_path/dummy_file
blk_1073744536 /tmp/hive/root/6f4a95a4-5b21-4624-baae-4e61fd16c40e/hive_2021-05-21_09-53-04_887_3047558412324042111-1/-mr-10004/c9c3653a-732b-4baf-8fdb-a1f715b6d22b/map.xml
blk_1073744538 /tmp/hadoop-yarn/staging/root/.staging/job_1621505231151_0003/job.jar
blk_1073744540 /tmp/hadoop-yarn/staging/root/.staging/job_1621505231151_0003/job.split
blk_1073744542 /tmp/hadoop-yarn/staging/root/.staging/job_1621505231151_0003/job.splitmetainfo
blk_1073744544 /tmp/hadoop-yarn/staging/root/.staging/job_1621505231151_0003/job.xml
Please check the logs or run fsck in order to identify the missing blocks. See the Hadoop FAQ for common causes and potential solutions.
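The file paths in the report can be collected automatically before deleting them. A minimal sketch, with two sample lines from the warning above embedded as text; on a real cluster you could pipe in the output of `hdfs fsck / -list-corruptfileblocks` instead:

```shell
#!/bin/sh
# Sketch: pull the file paths out of the missing-block report so each one
# can be inspected or deleted. The sample lines are copied from the warning
# above; replace them with real fsck output on an actual cluster.
report='blk_1073744538 /tmp/hadoop-yarn/staging/root/.staging/job_1621505231151_0003/job.jar
blk_1073744540 /tmp/hadoop-yarn/staging/root/.staging/job_1621505231151_0003/job.split'

echo "$report" | awk '{print $2}' | while read -r f; do
    echo "would delete: $f"
    # hdfs fsck "$f" -delete   # uncomment to actually delete on a real cluster
done
```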

If you confirm that these files are junk data, you can delete them from the command line:

hdfs fsck /tmp/hadoop-yarn/staging/root/.staging/job_1621505231151_0003/job.xml -delete

Because safemode is still on, the delete will fail; leave safemode manually first:

hadoop dfsadmin -safemode leave
// or (the non-deprecated form)
hdfs dfsadmin -safemode leave
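It can help to check the current status before and after leaving safemode. A minimal sketch: the real check is `hdfs dfsadmin -safemode get`, which prints e.g. "Safe mode is ON"; the status string is stubbed here so the logic runs without a cluster:

```shell
#!/bin/sh
# Sketch: leave safemode only if it is actually on.
# The status is stubbed; on a real cluster use the commented command instead.
status="Safe mode is ON"            # real: status=$(hdfs dfsadmin -safemode get)
case "$status" in
  *ON*)
    echo "safemode is on, leaving..."
    # hdfs dfsadmin -safemode leave  # uncomment on a real cluster
    ;;
  *)
    echo "safemode already off"
    ;;
esac
```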

Case 2: the server has run out of disk space

Log into the server and run:

df -h
du -sh /*
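`du -sh /*` lists the size of each top-level directory; sorting the output makes the biggest consumers obvious. A sketch over sample sizes (the paths and numbers are illustrative, in KB so `sort -n` works; real `-h` output needs `sort -h`):

```shell
#!/bin/sh
# Sketch: rank directories by disk usage, biggest first. The sample values
# are illustrative; on a real server use `du -s /* 2>/dev/null | sort -rn`
# (or `du -sh /* | sort -rh` for human-readable sizes).
du_output='9500000 /var
120000 /home
4800000 /opt'
echo "$du_output" | sort -rn | head -2
```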

After freeing up space, check the disk usage again:

[root@node1 bin]# df -h
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 1.9G     0  1.9G   0% /dev
tmpfs                    1.9G     0  1.9G   0% /dev/shm
tmpfs                    1.9G   12M  1.9G   1% /run
tmpfs                    1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/mapper/centos-root   17G  9.3G  7.8G  55% /
/dev/sda1               1014M  153M  862M  16% /boot
tmpfs                    378M     0  378M   0% /run/user/0
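The NameNode enters safemode when it runs low on disk for its metadata, so a quick check like the following can flag any filesystem nearing capacity (the 90% threshold is an arbitrary example, not something Hadoop prescribes):

```shell
#!/bin/sh
# Sketch: flag filesystems whose Use% is at or above a threshold (90 here,
# an arbitrary cutoff). NR > 1 skips the df header line.
df -h | awk -v limit=90 'NR > 1 { use = $5; sub(/%/, "", use); if (use + 0 >= limit) print $NF, use "%" }'
```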

After deleting the offending folder, keep two points in mind:

  • First, remember to format the namenode before restarting:
hadoop namenode -format
// or
hdfs namenode -format
  • Second, recreate the necessary folders under the Hadoop installation root:
mkdir -p hadoopDatas/namenodeDatas
mkdir -p nn/edits/
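The two steps above can be combined into one small script. Here `HADOOP_HOME` defaulting to `/tmp/hadoop-demo` is purely a placeholder; point it at your real installation root, and note that formatting wipes all HDFS metadata:

```shell
#!/bin/sh
# Sketch: recreate the required data folders, then reformat the namenode.
# HADOOP_HOME below is a placeholder; set it to your real install root.
HADOOP_HOME="${HADOOP_HOME:-/tmp/hadoop-demo}"
mkdir -p "$HADOOP_HOME/hadoopDatas/namenodeDatas" \
         "$HADOOP_HOME/nn/edits"
# hdfs namenode -format   # uncomment on a real cluster: wipes HDFS metadata!
ls -d "$HADOOP_HOME/hadoopDatas/namenodeDatas" "$HADOOP_HOME/nn/edits"
```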

