Concepts: big data is mainly about solving the collection, storage, and analysis/computation of massive datasets.
Linux configuration: set the console font size with `setfont sun12x22`.
Basic setup: a minimal install is missing some common tool packages, so install them first:
```
yum install net-tools wget zip unzip vim-enhanced -y
```
Also install the tools for uploading and downloading files through the terminal: `rz` uploads to the server, `sz` downloads from it.
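Both commands come from the lrzsz package, so the install step is presumably:

```
# rz/sz are provided by the lrzsz package on CentOS
sudo yum install lrzsz -y
```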
Configure a static IP:

```
[root@hadoop200 ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33
```

```
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="static"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="a3b13aa9-1510-453c-8dc7-fab53450f73b"
DEVICE="ens33"
ONBOOT="yes"
IPADDR=192.168.1.200
GATEWAY=192.168.1.2
DNS1=192.168.1.2
```
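The edited file does not take effect on its own; on CentOS 7 the network service has to be restarted. A minimal sketch, assuming the ens33 interface above:

```
# apply the new static IP (CentOS 7)
systemctl restart network
# confirm ens33 now reports 192.168.1.200
ifconfig ens33
```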
Set the hostname and the hosts mappings:

```
[root@hadoop200 ~]# vi /etc/hostname
[root@hadoop200 ~]# vi /etc/hosts
```

```
192.168.1.200 hadoop200
192.168.1.201 hadoop201
192.168.1.202 hadoop202
192.168.1.203 hadoop203
192.168.1.204 hadoop204
192.168.1.205 hadoop205
192.168.1.206 hadoop206
192.168.1.207 hadoop207
192.168.1.208 hadoop208
```
On Windows, add the same mappings to the hosts file at C:\Windows\System32\drivers\etc\hosts:

```
192.168.1.200 hadoop200
192.168.1.201 hadoop201
192.168.1.202 hadoop202
192.168.1.203 hadoop203
192.168.1.204 hadoop204
192.168.1.205 hadoop205
192.168.1.206 hadoop206
192.168.1.207 hadoop207
192.168.1.208 hadoop208
```
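To confirm the mappings work, a quick check from a Windows command prompt (hostname taken from the table above):

```
ping hadoop200
```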
Disable the firewall and keep it from starting on boot:

```
[root@hadoop200 ~]# systemctl stop firewalld
[root@hadoop200 ~]# systemctl disable firewalld.service
```
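A status check to confirm the firewall is off and stays off across reboots (a sketch; exact output varies by release):

```
# should report inactive / disabled
systemctl status firewalld
systemctl is-enabled firewalld
```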
Give the atguigu user root privileges, so that later you can run root commands with sudo:

```
[root@hadoop200 ~]# vim /etc/sudoers
```

Edit /etc/sudoers and add a line below the %wheel line, as shown:

```
root    ALL=(ALL)       ALL
%wheel  ALL=(ALL)       ALL
xiamu   ALL=(ALL)       NOPASSWD:ALL
```
Note: do not put the atguigu line directly below the root line. All users belong to the wheel group, so if you grant atguigu passwordless sudo first, the %wheel line is processed afterwards and overrides it back to requiring a password. The atguigu line therefore has to go below the %wheel line.
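A quick way to verify the NOPASSWD entry, assuming the xiamu user configured above:

```
# run as xiamu; should list /root without any password prompt
sudo ls /root
```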
Create the module and software directories under /opt and hand them over to the xiamu user:

```
[xiamu@hadoop200 opt]$ mkdir module/
[xiamu@hadoop200 opt]$ mkdir software/
[xiamu@hadoop200 opt]$ sudo chown xiamu:xiamu module/ software/
[xiamu@hadoop200 opt]$ ll
总用量 0
drwxr-xr-x. 2 xiamu xiamu 6 12月 30 17:03 module
drwxr-xr-x. 2 xiamu xiamu 6 12月 30 17:04 software
```
Uninstall the bundled JDK. If Java is already installed, remove it first. Note: if your VM is a minimal install, this step is unnecessary.

```
[root@hadoop100 ~]# rpm -qa | grep -i java | xargs -n1 rpm -e --nodeps
[root@hadoop100 ~]# reboot
```
After the reboot, check the hostname, the IP address, and whether the machine can ping the outside network.
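A sketch of those post-reboot checks (hostname and address assume the hadoop200 layout above):

```
hostname                  # expect hadoop200
ifconfig ens33            # expect 192.168.1.200
ping -c 3 www.baidu.com   # expect replies, i.e. external connectivity
```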
Install the JDK

```
[xiamu@hadoop202 software]$ wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "https://download.oracle.com/otn/java/jdk/8u212-b10/59066701cf1a433da9770636fbc4c9aa/jdk-8u212-linux-x64.tar.gz"
[xiamu@hadoop202 software]$ tar -zxvf jdk-8u212-linux-x64.tar.gz -C /opt/module/
[xiamu@hadoop202 software]$ cd /opt/module/jdk1.8.0_212/
[xiamu@hadoop202 jdk1.8.0_212]$ vim /etc/profile
[xiamu@hadoop202 jdk1.8.0_212]$ cd /etc/profile.d/
[xiamu@hadoop202 profile.d]$ sudo vim my_env.sh
```

Contents of my_env.sh:

```
export JAVA_HOME=/opt/module/jdk1.8.0_212
export PATH=$PATH:$JAVA_HOME/bin
```

```
[xiamu@hadoop202 profile.d]$ source /etc/profile
[xiamu@hadoop202 hadoop-3.1.3]$ java -version
java version "1.8.0_212"
```
Install Hadoop

```
[xiamu@hadoop202 software]$ tar -zxvf hadoop-3.1.3.tar.gz -C /opt/module/
[xiamu@hadoop202 hadoop-3.1.3]$ sudo vim /etc/profile.d/my_env.sh
```

Append to my_env.sh:

```
export HADOOP_HOME=/opt/module/hadoop-3.1.3
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
```

```
[xiamu@hadoop202 hadoop-3.1.3]$ source /etc/profile
```
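A quick check that the new PATH entries took effect:

```
# prints the 3.1.3 version banner if $HADOOP_HOME/bin is on the PATH
hadoop version
```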
Hadoop directory layout: bin holds the frequently used commands (hdfs, yarn, mapred); the folders you touch most are bin, etc, and sbin.
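For reference, the top level of the unpacked distribution typically looks like this:

```
ls /opt/module/hadoop-3.1.3
# bin  etc  include  lib  libexec  LICENSE.txt  NOTICE.txt  README.txt  sbin  share
```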
Local (standalone) mode

```
[xiamu@hadoop202 hadoop-3.1.3]$ mkdir wcinput
[xiamu@hadoop202 hadoop-3.1.3]$ cd wcinput
[xiamu@hadoop202 wcinput]$ vim word.txt
```

Contents of word.txt:

```
ss ss
cls cls
banzhang
bobo
yangge
```

```
[xiamu@hadoop202 hadoop-3.1.3]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar wordcount wcinput/ ./wcoutput
[xiamu@hadoop202 hadoop-3.1.3]$ cd wcoutput/
[xiamu@hadoop202 wcoutput]$ ll
总用量 4
-rw-r--r--. 1 xiamu xiamu 38 12月 31 01:07 part-r-00000
-rw-r--r--. 1 xiamu xiamu 0 12月 31 01:07 _SUCCESS
[xiamu@hadoop202 wcoutput]$ cat part-r-00000
banzhang	1
bobo	1
cls	2
ss	2
yangge	1
```

Running the same job again fails, because MapReduce refuses to overwrite an existing output directory:

```
[xiamu@hadoop202 hadoop-3.1.3]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar wordcount wcinput/ ./wcoutput
org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory file:/opt/module/hadoop-3.1.3/wcoutput already exists
```
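The usual fix is to remove the old output directory before re-running (in local mode it is just a directory on the local filesystem):

```
rm -rf ./wcoutput
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar wordcount wcinput/ ./wcoutput
```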
scp copy command: push from the local host, pull from a remote host, or copy between two remote hosts:

```
[xiamu@hadoop202 module]$ scp -r jdk1.8.0_212/ xiamu@hadoop203:/opt/module/
[xiamu@hadoop203 module]$ scp -r xiamu@hadoop202:/opt/module/hadoop-3.1.3 ./
[xiamu@hadoop203 module]$ scp -r xiamu@hadoop202:/opt/module/* xiamu@hadoop204:/opt/module/
```
rsync command: unlike scp, it transfers only the differences, which makes repeated syncs faster:

```
[xiamu@hadoop202 module]$ sudo yum install rsync
[xiamu@hadoop202 module]$ rsync -av hadoop-3.1.3/ xiamu@hadoop203:/opt/module/hadoop-3.1.3/
```
xsync cluster distribution script

```
[xiamu@hadoop202 ~]$ cd /home/xiamu/
[xiamu@hadoop202 ~]$ mkdir bin
[xiamu@hadoop202 ~]$ cd bin/
[xiamu@hadoop202 bin]$ vim xsync
```

```
#!/bin/bash

# require at least one file argument
if [ $# -lt 1 ]
then
    echo Not Enough Argument!
    exit;
fi

# iterate over every target host
for host in hadoop202 hadoop203 hadoop204
do
    echo ==================== $host ====================
    # sync every file passed on the command line
    for file in $@
    do
        if [ -e $file ]
        then
            # resolve the file's absolute parent directory (following symlinks)
            pdir=$(cd -P $(dirname $file); pwd)
            fname=$(basename $file)
            ssh $host "mkdir -p $pdir"
            rsync -av $pdir/$fname $host:$pdir
        else
            echo $file does not exist!
        fi
    done
done
```

Make the xsync script executable:

```
[xiamu@hadoop202 bin]$ chmod 777 xsync
[xiamu@hadoop202 bin]$ xsync /home/xiamu/bin
[xiamu@hadoop202 bin]$ sudo cp xsync /bin/
[xiamu@hadoop202 bin]$ sudo ./bin/xsync /etc/profile.d/my_env.sh
```

Note: when running xsync through sudo, always spell out its full path.

Make the environment variables take effect on the other nodes:

```
[xiamu@hadoop203 ~]$ source /etc/profile
[xiamu@hadoop204 ~]$ source /etc/profile
```
Set up passwordless ssh login

```
[xiamu@hadoop202 .ssh]$ pwd
/home/xiamu/.ssh
[xiamu@hadoop202 ~]$ ssh-keygen -t rsa
[xiamu@hadoop202 ~]$ cd .ssh/
[xiamu@hadoop202 .ssh]$ ll
总用量 12
-rw-------. 1 xiamu xiamu 1675 12月 31 04:20 id_rsa
-rw-r--r--. 1 xiamu xiamu 397 12月 31 04:20 id_rsa.pub
-rw-r--r--. 1 xiamu xiamu 726 12月 31 02:22 known_hosts
[xiamu@hadoop202 .ssh]$ cat id_rsa
-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEAs27rMM0gTqHTYpPExLsuhvzO2Z3hZPellncDj/pvqFClFCzy
...
-----END RSA PRIVATE KEY-----
[xiamu@hadoop202 .ssh]$ cat id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCzbuswzSBOodNik8TEuy6G/M7ZneFk96WWdwOP+m+oUKUULPKxsadl++5yQLt+rlauC/jNF9S93TbJEasoBJHsxywTdwe4gmxT5Kx73kDTg41y9vn8gduZdxeOEe3bYvmsJKGpSyPw9ZIqz2mrvIjj1SHE9/CZHQzU9a4EnPePoltYHyP4wIzwauThuQcmFa2O3aZZEYxBdUZxLrFZviI02SQd4raHR7h5nMkGbsMWqjAq3Ky4p1i1Pji0P5Uj++L4A7aL2qiIKXcpW3JeUPx/xdvfPB+Q/llNUaMahIayacNVHbnBDGZdJC3e5qXplBzjcst/RB+KzF2nL8XZp0LX xiamu@hadoop202
[xiamu@hadoop202 .ssh]$ ssh-copy-id hadoop202
[xiamu@hadoop202 .ssh]$ ssh-copy-id hadoop203
[xiamu@hadoop202 .ssh]$ ssh-copy-id hadoop204
[xiamu@hadoop202 .ssh]$ ll
总用量 16
-rw-------. 1 xiamu xiamu 397 12月 31 04:30 authorized_keys
-rw-------. 1 xiamu xiamu 1675 12月 31 04:20 id_rsa
-rw-r--r--. 1 xiamu xiamu 397 12月 31 04:20 id_rsa.pub
-rw-r--r--. 1 xiamu xiamu 726 12月 31 02:22 known_hosts
```

File roles:
known_hosts — records the public keys of machines this host has connected to over ssh
id_rsa — the generated private key
id_rsa.pub — the generated public key
authorized_keys — stores the public keys authorized for passwordless login to this machine

Note: you also need to repeat this on hadoop203 with the xiamu account (passwordless login to hadoop202, hadoop203, hadoop204), on hadoop204 with the xiamu account (same three targets), and on hadoop202 with the root account (same three targets).
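A quick verification that the keys are in place:

```
# should print the remote hostname with no password prompt
ssh hadoop203 hostname
```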
Cluster configuration

```
[xiamu@hadoop202 hadoop]$ vim core-site.xml
```

```
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <!-- Address of the NameNode -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop202:8020</value>
    </property>

    <!-- Storage directory for Hadoop data -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/module/hadoop-3.1.3/data</value>
    </property>

    <!-- Static user for HDFS web UI logins: xiamu -->
    <property>
        <name>hadoop.http.staticuser.user</name>
        <value>xiamu</value>
    </property>
</configuration>
```

```
[xiamu@hadoop202 hadoop]$ vim hdfs-site.xml
```

```
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <!-- NameNode web UI address -->
    <property>
        <name>dfs.namenode.http-address</name>
        <value>hadoop202:9870</value>
    </property>

    <!-- SecondaryNameNode (2nn) web UI address -->
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>hadoop204:9868</value>
    </property>
</configuration>
```

```
[xiamu@hadoop202 hadoop]$ vim yarn-site.xml
```

```
<?xml version="1.0"?>
<configuration>
    <!-- Use mapreduce_shuffle as the MR auxiliary service -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>

    <!-- Address of the ResourceManager -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hadoop203</value>
    </property>

    <!-- Environment variables inherited by containers -->
    <property>
        <name>yarn.nodemanager.env-whitelist</name>
        <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
    </property>
</configuration>
```

```
[xiamu@hadoop202 hadoop]$ vim mapred-site.xml
```

```
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <!-- Run MapReduce on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
```
Starting the whole cluster

```
[xiamu@hadoop202 hadoop]$ pwd
/opt/module/hadoop-3.1.3/etc/hadoop
[xiamu@hadoop202 hadoop]$ vim workers
```

```
hadoop202
hadoop203
hadoop204
```

Distribute the edited configuration to every node (e.g. with xsync) first. Format the NameNode on the first start, then bring up HDFS on hadoop202 and YARN on hadoop203 (the ResourceManager node):

```
[xiamu@hadoop202 hadoop-3.1.3]$ hdfs namenode -format
[xiamu@hadoop202 hadoop-3.1.3]$ sbin/start-dfs.sh
[xiamu@hadoop203 hadoop-3.1.3]$ sbin/start-yarn.sh
[xiamu@hadoop202 hadoop-3.1.3]$ jps
1876 DataNode
2199 NodeManager
2296 Jps
1757 NameNode
[xiamu@hadoop203 hadoop-3.1.3]$ jps
1714 ResourceManager
2163 Jps
1828 NodeManager
1531 DataNode
[xiamu@hadoop204 ~]$ jps
1520 DataNode
1605 SecondaryNameNode
1749 NodeManager
1850 Jps
```
Web UI for the HDFS NameNode: http://hadoop202:9870. Web UI for the YARN ResourceManager: http://hadoop203:8088/cluster.
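The UIs can also be probed from the shell (a sketch; an HTTP 200 means the page is up):

```
curl -s -o /dev/null -w "%{http_code}\n" http://hadoop202:9870
curl -s -o /dev/null -w "%{http_code}\n" http://hadoop203:8088/cluster
```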
Basic cluster test (uploading files)

```
[xiamu@hadoop202 hadoop-3.1.3]$ hadoop fs -mkdir /wcinput
[xiamu@hadoop202 hadoop-3.1.3]$ hadoop fs -put wcinput/word.txt /wcinput
```

After this completes, a new directory appears in the NameNode web UI, containing the word.txt file.
Upload a large file, then look at how HDFS stores it on disk. Concatenating the raw block files in order reproduces the original archive:

```
[xiamu@hadoop202 hadoop-3.1.3]$ hadoop fs -put /opt/software/jdk-8u212-linux-x64.tar.gz /
[xiamu@hadoop202 subdir0]$ pwd
/opt/module/hadoop-3.1.3/data/dfs/data/current/BP-1090801922-192.168.1.202-1672543197610/current/finalized/subdir0/subdir0
[xiamu@hadoop202 subdir0]$ ls
blk_1073741825            blk_1073741826_1002.meta
blk_1073741825_1001.meta  blk_1073741827
blk_1073741826            blk_1073741827_1003.meta
[xiamu@hadoop202 subdir0]$ cat blk_1073741826 >> tmp.tar.gz
[xiamu@hadoop202 subdir0]$ cat blk_1073741827 >> tmp.tar.gz
[xiamu@hadoop202 subdir0]$ tar -zxvf tmp.tar.gz
[xiamu@hadoop202 subdir0]$ ls
blk_1073741825            blk_1073741826            blk_1073741827            jdk1.8.0_212
blk_1073741825_1001.meta  blk_1073741826_1002.meta  blk_1073741827_1003.meta  tmp.tar.gz
```

Run a wordcount on the cluster:

```
[xiamu@hadoop202 hadoop-3.1.3]$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar wordcount /wcinput /123
```
Recovering from a crashed cluster

Stop dfs and yarn, delete the data and logs directories under the Hadoop root on every node, then re-format the NameNode and restart:

```
[xiamu@hadoop202 hadoop-3.1.3]$ stop-dfs.sh
[xiamu@hadoop203 hadoop-3.1.3]$ stop-yarn.sh
[xiamu@hadoop202 hadoop-3.1.3]$ hdfs namenode -format
[xiamu@hadoop202 hadoop-3.1.3]$ start-dfs.sh
[xiamu@hadoop202 hadoop-3.1.3]$ start-yarn.sh
```
Configure the history server

```
[xiamu@hadoop202 hadoop]$ vim mapred-site.xml
```

```
<configuration>
    <!-- Run MapReduce on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>

    <!-- History server RPC address -->
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>hadoop202:10020</value>
    </property>

    <!-- History server web UI address -->
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>hadoop202:19888</value>
    </property>
</configuration>
```

```
[xiamu@hadoop202 hadoop]$ xsync mapred-site.xml
[xiamu@hadoop202 hadoop]$ mapred --daemon start historyserver
[xiamu@hadoop202 hadoop]$ jps
4420 NodeManager
4117 DataNode
4661 Jps
4620 JobHistoryServer
3998 NameNode
[xiamu@hadoop203 hadoop]$ stop-yarn.sh
[xiamu@hadoop203 hadoop]$ start-yarn.sh
[xiamu@hadoop202 hadoop-3.1.3]$ hadoop fs -mkdir /input
[xiamu@hadoop202 hadoop-3.1.3]$ hadoop fs -put ./wcinput/word.txt /input
[xiamu@hadoop202 hadoop-3.1.3]$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar wordcount /input /output
```
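Given the webapp address configured above, the job history UI should be at http://hadoop202:19888/jobhistory; it can be probed the same way:

```
curl -s -o /dev/null -w "%{http_code}\n" http://hadoop202:19888/jobhistory
```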
Configure log aggregation. Concept: after an application finishes, its logs are uploaded to HDFS. Benefit: you can conveniently view the details of a run, which helps with development and debugging. Note: enabling log aggregation requires restarting the NodeManager, ResourceManager, and HistoryServer.
```
[xiamu@hadoop202 hadoop]$ vim yarn-site.xml
```

```
<!-- Enable log aggregation -->
<property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
</property>

<!-- Log server URL -->
<property>
    <name>yarn.log.server.url</name>
    <value>http://hadoop202:19888/jobhistory/logs</value>
</property>

<!-- Retain logs for 7 days -->
<property>
    <name>yarn.log-aggregation.retain-seconds</name>
    <value>604800</value>
</property>
```

```
[xiamu@hadoop202 hadoop]$ xsync yarn-site.xml
[xiamu@hadoop203 hadoop]$ stop-yarn.sh
[xiamu@hadoop202 hadoop]$ mapred --daemon stop historyserver
[xiamu@hadoop203 hadoop]$ start-yarn.sh
[xiamu@hadoop202 hadoop]$ mapred --daemon start historyserver
[xiamu@hadoop202 hadoop-3.1.3]$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar wordcount /input /output2
```
Visit http://hadoop203:8088/cluster to see the currently running applications; once a job completes, click into it to view its logs.
Writing common Hadoop cluster scripts. A cluster start/stop script covering HDFS, YARN, and the history server: myhadoop.sh
```
[xiamu@hadoop202 ~]$ cd /home/xiamu/bin/
[xiamu@hadoop202 bin]$ vim myhadoop.sh
```

```
#!/bin/bash

if [ $# -lt 1 ]
then
    echo "No Args Input..."
    exit;
fi

case $1 in
"start")
    echo " =================== starting the hadoop cluster ==================="
    echo " --------------- starting hdfs ---------------"
    ssh hadoop202 "/opt/module/hadoop-3.1.3/sbin/start-dfs.sh"
    echo " --------------- starting yarn ---------------"
    ssh hadoop203 "/opt/module/hadoop-3.1.3/sbin/start-yarn.sh"
    echo " --------------- starting historyserver ---------------"
    ssh hadoop202 "/opt/module/hadoop-3.1.3/bin/mapred --daemon start historyserver"
;;
"stop")
    echo " =================== stopping the hadoop cluster ==================="
    echo " --------------- stopping historyserver ---------------"
    ssh hadoop202 "/opt/module/hadoop-3.1.3/bin/mapred --daemon stop historyserver"
    echo " --------------- stopping yarn ---------------"
    ssh hadoop203 "/opt/module/hadoop-3.1.3/sbin/stop-yarn.sh"
    echo " --------------- stopping hdfs ---------------"
    ssh hadoop202 "/opt/module/hadoop-3.1.3/sbin/stop-dfs.sh"
;;
*)
    echo "Input Args Error..."
;;
esac
```

Note that stop tears the services down in the reverse order of start.

```
[xiamu@hadoop202 bin]$ chmod 777 myhadoop.sh
[xiamu@hadoop202 bin]$ myhadoop.sh stop
[xiamu@hadoop202 ~]$ myhadoop.sh start
```
A script to check the Java processes on all three servers: jpsall
```
[xiamu@hadoop202 ~]$ cd /home/xiamu/bin
[xiamu@hadoop202 bin]$ vim jpsall
```

```
#!/bin/bash

for host in hadoop202 hadoop203 hadoop204
do
    echo =============== $host ===============
    ssh $host jps
done
```

```
[xiamu@hadoop202 bin]$ chmod 777 jpsall
[xiamu@hadoop202 bin]$ jpsall
[xiamu@hadoop202 ~]$ xsync /home/xiamu/bin/
```
Common port numbers

Hadoop 3.x:
- HDFS NameNode internal communication: 8020 / 9000 / 9820
- HDFS NameNode web UI (user queries): 9870
- YARN web UI for viewing job status: 8088
- History server: 19888

Hadoop 2.x:
- HDFS NameNode internal communication: 8020 / 9000
- HDFS NameNode web UI (user queries): 50070
- YARN web UI for viewing job status: 8088
- History server: 19888
Common configuration files. 3.x: core-site.xml, hdfs-site.xml, yarn-site.xml, mapred-site.xml, workers. 2.x: core-site.xml, hdfs-site.xml, yarn-site.xml, mapred-site.xml, slaves.
Cluster time synchronization (optional). In general, if the cluster can reach the internet this does not need to be configured. After I set it up, for reasons unknown the following appeared:

```
1 Jan 22:31:36 ntpdate[1532]: bind() fails: Permission denied
```
```
[root@hadoop202 xiamu]# systemctl status ntpd
[root@hadoop202 xiamu]# systemctl start ntpd
[root@hadoop202 xiamu]# systemctl is-enabled ntpd
[root@hadoop202 xiamu]# vim /etc/ntp.conf
```

1. Uncomment the restrict line so it reads:

```
restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
```

192.168.1.0 means the 192.168.1 subnet.

2. Comment out these lines:

```
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst
```

3. Append two lines at the end of the file:

```
server 127.127.1.0
fudge 127.127.1.0 stratum 10
```

```
[root@hadoop202 xiamu]# vim /etc/sysconfig/ntpd
```

Add:

```
SYNC_HWCLOCK=yes
```

```
[root@hadoop202 xiamu]# systemctl start ntpd
[root@hadoop202 xiamu]# systemctl enable ntpd
```

On the hadoop203 host:

```
[root@hadoop203 bin]# systemctl stop ntpd
[root@hadoop203 bin]# systemctl disable ntpd
[root@hadoop203 bin]# crontab -e
```

```
*/1 * * * * /usr/sbin/ntpdate hadoop202
```

```
[root@hadoop203 bin]# date -s "2021-09-11 11:11:11"
2021年 09月 11日 星期六 11:11:11 CST
[root@hadoop203 bin]# date
2021年 09月 11日 星期六 11:11:17 CST
```
Supplement

```
[xiamu@hadoop202 ~]$ hadoop fs -put /opt/software/jdk-8u212-linux-x64.tar.gz /
```
The block size is 128 MB, but the JDK tarball is larger than one block, so it is stored as two blocks.
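The rough arithmetic behind the two blocks (the exact tarball size may vary slightly by build):

```
# jdk-8u212-linux-x64.tar.gz is roughly 195 MB
# default dfs.blocksize = 128 MB
#   block 1: 128 MB
#   block 2: the remaining ~67 MB
# matching the two data blocks (blk_1073741826, blk_1073741827) inspected earlier
```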