Hadoop Cluster Setup on CentOS

Hadoop Cluster Setup on CentOS (Part 1)

References:

http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/

http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-multi-node-cluster/

http://www.michael-noll.com/tutorials/writing-an-hadoop-mapreduce-program-in-python/

http://hadoop.apache.org/common/docs/current/cluster_setup.html

The cluster configuration below uses two machines as an example: one named master and the other slave1.

master runs the name node, data node, task tracker, job tracker, and secondary name node;

slave1 runs a data node and task tracker.

Steps marked with a trailing * must be performed identically on both machines.

  1. Install the JDK *

yum install java-1.6.0-openjdk-devel

  2. Set environment variables *

Edit /etc/profile and set the JAVA_HOME environment variable and the classpath:

export JAVA_HOME="/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64"

export PATH=$PATH:$JAVA_HOME/bin

export CLASSPATH=.:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar

  3. Add hosts mappings *

Edit /etc/hosts. Note that host names must not contain underscores (see step 9):

192.168.225.16 master

192.168.225.66 slave1

  4. Configure SSH *

cd /root && mkdir .ssh

chmod 700 .ssh && cd .ssh

Create an RSA key pair with an empty passphrase:

ssh-keygen -t rsa -P ""

When prompted for the key file name, enter id_rsa.

Append the public key to authorized_keys:

cat id_rsa.pub >> authorized_keys

chmod 644 authorized_keys  # important

Edit the sshd configuration file /etc/ssh/sshd_config and uncomment the line #AuthorizedKeysFile .ssh/authorized_keys.

Restart the sshd service:

service sshd restart

Test the SSH connection. You will be asked to confirm the connection; after confirming, the host key is added to known_hosts:

ssh localhost

  5. Set up SSH access between master and slave1

Repeat step 4 on slave1, then append the key from slave1's .ssh/authorized_keys to master's .ssh/authorized_keys. After copying it over, check the trailing comment (something like root@localhost) and change it to root@slave1. Likewise copy master's key over to slave1 and change its trailing comment to root@master.

Or use the following command instead:

ssh-copy-id -i ~/.ssh/id_rsa.pub root@slave1

Test the SSH connections:

On master, run:

ssh slave1

On slave1, run:

ssh master

  6. Install Hadoop

Download the Hadoop tarball:

wget http://mirror.bjtu.edu.cn/apache/hadoop/common/hadoop-0.20.203.0/hadoop-0.20.203.0rc1.tar.gz

Copy the tarball to slave1:

scp hadoop-0.20.203.0rc1.tar.gz root@slave1:/root/

Extract it and move it into place:

tar xzvf hadoop-0.20.203.0rc1.tar.gz

mkdir /usr/local/hadoop

mv hadoop-0.20.203.0/* /usr/local/hadoop

Edit the .bashrc file (in the user's home directory, i.e. ~/.bashrc; for root, that is /root/.bashrc) and add the environment variables:

export HADOOP_HOME=/usr/local/hadoop

export PATH=$PATH:$HADOOP_HOME/bin
  7. Configure Hadoop environment variables *

From here on, paths to files under the Hadoop directory are given relative to hadoop/.

Edit hadoop/conf/hadoop-env.sh and change JAVA_HOME to the value set in step 2.

  8. Create a local temporary directory for Hadoop *

mkdir /root/hadoop_tmp

(Important: do not put this directory under /tmp! The space allocated to /tmp is usually small, and copying a few large files into HDFS will fill it up and cause errors.)

Change its ownership:

chown -R hadoop:hadoop /root/hadoop_tmp

Or, more permissively:

chmod -R 777 /root/hadoop_tmp

  9. Configure Hadoop

Edit master's hadoop/conf/core-site.xml and add the following inside the <configuration> section.

Note: the value of fs.default.name must not contain underscores.

<property>
  <name>hadoop.tmp.dir</name>
  <value>/root/hadoop_tmp/hadoop_${user.name}</value>
</property>
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:54310</value>
</property>
<property>
  <name>io.sort.mb</name>
  <value>1024</value>
</property>

The io.sort.mb value sets the amount of memory used for sorting; more memory can speed up job processing.

Edit hadoop/conf/mapred-site.xml and add the following inside the <configuration> section:

<property>
  <name>mapred.job.tracker</name>
  <value>localhost:54311</value>
</property>
<property>
  <name>mapred.map.child.java.opts</name>
  <value>-Xmx4096m</value>
</property>
<property>
  <name>mapred.reduce.child.java.opts</name>
  <value>-Xmx4096m</value>
</property>

mapred.map.child.java.opts and mapred.reduce.child.java.opts set the maximum heap size for map and reduce tasks respectively. Too small a heap can make tasks throw an OutOfMemoryError.

Edit conf/hdfs-site.xml and add the following inside the <configuration> section:

<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>

Likewise, edit slave1's /usr/local/hadoop/conf/core-site.xml, conf/mapred-site.xml, and conf/hdfs-site.xml, adding exactly the same contents as above.
  10. Edit the hadoop/bin/hadoop script

Change line 221 as shown below. The -jvm option is invalid for the root user, so add a check (alternatively, running the script as a non-root user also works). Replace

HADOOP_OPTS="$HADOOP_OPTS -jvm server $HADOOP_DATANODE_OPTS"

with:

# For root, the -jvm option is invalid.

CUR_USER=`whoami`

if [ "$CUR_USER" = "root" ]; then

    HADOOP_OPTS="$HADOOP_OPTS -server $HADOOP_DATANODE_OPTS"

else

    HADOOP_OPTS="$HADOOP_OPTS -jvm server $HADOOP_DATANODE_OPTS"

fi

unset CUR_USER

At this point, master and slave1 each have a complete single-node setup, and the single node can be tested on each machine separately.

Start the daemons:

hadoop/bin/start-all.sh

Run the jps command; you should see output similar to:

937 DataNode

9232 Jps

8811 NameNode

12033 JobTracker

12041 TaskTracker

Source: [http://blog.csdn.net/inte_sleeper/article/details/6569985](http://blog.csdn.net/inte_sleeper/article/details/6569985)

Hadoop Cluster Setup on CentOS (Part 2)

The following steps merge the two single-node setups into a multi-node cluster.

  11. Merge the single-node setups into a multi-node cluster

Edit master's hadoop/conf/core-site.xml:

<property>
  <name>hadoop.tmp.dir</name>
  <value>/root/hadoop_tmp/hadoop_${user.name}</value>
</property>
<property>
  <name>fs.default.name</name>
  <value>hdfs://master:54310</value>
</property>
<property>
  <name>io.sort.mb</name>
  <value>1024</value>
</property>

Edit conf/mapred-site.xml, adding the following inside the <configuration> section:

<property>
  <name>mapred.job.tracker</name>
  <value>master:54311</value>
</property>
<property>
  <name>mapred.map.child.java.opts</name>
  <value>-Xmx4096m</value>
</property>
<property>
  <name>mapred.reduce.child.java.opts</name>
  <value>-Xmx4096m</value>
</property>

Edit conf/hdfs-site.xml, adding the following inside the <configuration> section:

<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>

Copy these three files into hadoop/conf on slave1 (i.e., master and slave1 end up with exactly the same contents).

Edit hadoop/conf/masters on every node and set its contents to:

master

Edit hadoop/conf/slaves on every node and set its contents to:

master

slave1



Delete the dfs/data directories on both master and slave1:

rm -rf /root/hadoop_tmp/hadoop_root/dfs/data

Reformat the namenode:

hadoop/bin/hadoop namenode -format



To test, run on master:

hadoop/bin/start-all.sh

Then run the jps command on master.

The output should look similar to:

     11648 TaskTracker

11166 NameNode

11433 SecondaryNameNode

12552 Jps

11282 DataNode

11525 JobTracker

Run jps on slave1.

The output should at least include a DataNode (otherwise something is wrong):

3950 Jps

3121 TaskTracker

3044 DataNode

  12. Test a job

First, upgrade Python (optional; only needed if the job is written in Python):

cd /etc/yum.repos.d/

wget http://mirrors.geekymedia.com/centos/geekymedia.repo

yum makecache

yum -y install python26

A tutorial on upgrading Python is in a separate document. If you have already installed python2.6 via the method above, uninstall it first:

yum remove python26 python26-devel

CentOS's yum depends on python2.4, and the python program in /usr/bin is python2.4. We need to change it to python2.6.



cd /usr/bin/

Edit the yum file and change its first line from

#!/usr/bin/python

to

#!/usr/bin/python2.4

and save the file.

Delete the old python executable (this file is identical to the python2.4 in the same directory, so it is safe to delete):

rm -f python

Point python at the python2.6 executable:

ln -s python26 python
  13. Word count, Python version

map.py

#!/usr/bin/python
import sys

# Read lines from stdin and emit one "word<TAB>1" pair per word.
for line in sys.stdin:
    line = line.strip()
    words = line.split()
    for word in words:
        print '%s\t%s' % (word, 1)

reduce.py

#!/usr/bin/python
import sys

# Accumulate counts per word from the sorted "word<TAB>count" stream.
wc = {}
for line in sys.stdin:
    line = line.strip()
    word, count = line.split('\t', 1)
    try:
        count = int(count)
    except ValueError:
        continue
    if word in wc:
        wc[word] += count
    else:
        wc[word] = count

for key in wc.keys():
    print '%s\t%s' % (key, wc[key])

Local test:

echo "foo foo bar bar foo abc" | python map.py

echo "foo foo bar bar foo abc" | python map.py | sort | python reduce.py
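The local pipeline above can also be simulated in pure Python, which is handy for sanity-checking the map/reduce logic without Hadoop. This is a Python 3 sketch (the scripts in this post target python2.6); the function names are made up for illustration.

```python
def map_phase(text):
    # Emit (word, 1) pairs, like map.py does one line at a time.
    return [(word, 1) for line in text.splitlines() for word in line.split()]

def reduce_phase(pairs):
    # Hadoop streaming sorts by key before calling the reducer; reduce.py
    # then accumulates a count per word.
    wc = {}
    for word, count in sorted(pairs):
        wc[word] = wc.get(word, 0) + count
    return wc

print(reduce_phase(map_phase("foo foo bar bar foo abc")))
# {'abc': 1, 'bar': 2, 'foo': 3}
```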

Test in Hadoop:

hadoop jar /usr/local/hadoop/contrib/streaming/hadoop-streaming-0.20.203.0.jar -file map.py -mapper map.py -file reduce.py -reducer reduce.py -input wc/* -output wc-out

When the job succeeds, a wc-out directory is created in HDFS.

View the results:

hadoop fs -ls wc-out

hadoop fs -cat wc-out/part-00000

  14. Add a new node to the cluster

a. Perform steps 1 and 2 on the new node.

b. Edit the new node's /etc/hosts to add the cluster's hosts, and edit the other nodes' /etc/hosts to add the new node.

c. Add the new node to master's conf/slaves file.

d. Start the datanode and tasktracker:

hadoop-daemon.sh start datanode

hadoop-daemon.sh start tasktracker

  15. Troubleshooting

Hadoop's logs are in hadoop/logs.

The root of the logs directory holds the namenode, datanode, jobtracker, and tasktracker logs, named hadoop-{username}-namenode/datanode/jobtracker/tasktracker-{hostname}.log.

The userlogs directory contains the per-job logs; each job has its own directory, named job_YYYYmmddHHmm_xxxx, which in turn contains several directories named attempt_{jobid}_m_xxxxx or attempt_{jobid}_r_xxxxx. The _m_ directories hold map-task logs and the _r_ directories hold reduce-task logs, so when something fails you can go straight to the relevant log.
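Separating the map attempts from the reduce attempts under a job directory is easy to script. The sketch below (Python 3) uses the attempt_..._m_/_r_ naming convention described above; the sample directory names are made up for illustration.

```python
def split_attempts(dirnames):
    # Map-task logs carry "_m_" in the directory name, reduce-task logs "_r_".
    maps = [d for d in dirnames if "_m_" in d]
    reduces = [d for d in dirnames if "_r_" in d]
    return maps, reduces

attempts = [
    "attempt_201107181030_0001_m_000000_0",
    "attempt_201107181030_0001_m_000001_0",
    "attempt_201107181030_0001_r_000000_0",
]
maps, reduces = split_attempts(attempts)
print(len(maps), len(reduces))  # prints: 2 1
```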

Common errors:

  1. An exception like:

ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs …

means the namenode was formatted first and the configuration was changed afterwards. Delete the contents of the dfs/data directory and reformat the namenode.

  2. An exception like:

INFO org.apache.hadoop.ipc.Client: Retrying connect to server:…

First check whether the name node has started. If it has, the configuration on master or slave1 may be wrong; see step 11 for the cluster configuration. It may also be a firewall problem, in which case the following exceptions must be added:

Port 50010 is used for data transfer, 50020 for RPC calls, 50030 is the web job-status monitor, 54311 is the job tracker, and 54310 is the port for communicating with master.

The complete port list is here:

http://www.cloudera.com/blog/2009/08/hadoop-default-ports-quick-reference/

iptables -A RH-Firewall-1-INPUT -p tcp -m tcp --dport 50010 -j ACCEPT

iptables -A RH-Firewall-1-INPUT -p tcp -m tcp --dport 50020 -j ACCEPT

iptables -A RH-Firewall-1-INPUT -p tcp -m tcp --dport 50030 -j ACCEPT

iptables -A RH-Firewall-1-INPUT -p tcp -m tcp --dport 50060 -j ACCEPT

iptables -A RH-Firewall-1-INPUT -p tcp -m tcp --dport 54310 -j ACCEPT

iptables -A RH-Firewall-1-INPUT -p tcp -m tcp --dport 54311 -j ACCEPT

iptables -A OUTPUT -p tcp -m tcp --dport 50010 -j ACCEPT

iptables -A OUTPUT -p tcp -m tcp --dport 50020 -j ACCEPT

iptables -A OUTPUT -p tcp -m tcp --sport 50010 -j ACCEPT

iptables -A OUTPUT -p tcp -m tcp --sport 50020 -j ACCEPT

iptables -A OUTPUT -p tcp -m tcp --dport 50030 -j ACCEPT

iptables -A OUTPUT -p tcp -m tcp --sport 50030 -j ACCEPT

iptables -A OUTPUT -p tcp -m tcp --dport 50060 -j ACCEPT

iptables -A OUTPUT -p tcp -m tcp --sport 50060 -j ACCEPT

iptables -A OUTPUT -p tcp -m tcp --dport 54310 -j ACCEPT

iptables -A OUTPUT -p tcp -m tcp --sport 54310 -j ACCEPT

iptables -A OUTPUT -p tcp -m tcp --dport 54311 -j ACCEPT

iptables -A OUTPUT -p tcp -m tcp --sport 54311 -j ACCEPT
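The long list of rules above is repetitive enough to generate with a short script. This Python 3 sketch emits one INPUT rule plus paired OUTPUT dport/sport rules per port, roughly the pattern the hand-written list follows (the hand-written list is not perfectly symmetric, so review the output before using it); the chain name RH-Firewall-1-INPUT is taken from the rules above.

```python
PORTS = [50010, 50020, 50030, 50060, 54310, 54311]

def iptables_rules(ports):
    # Generate ACCEPT rules for each Hadoop port: inbound by destination
    # port, outbound by both destination and source port.
    rules = []
    for p in ports:
        rules.append("iptables -A RH-Firewall-1-INPUT -p tcp -m tcp --dport %d -j ACCEPT" % p)
        rules.append("iptables -A OUTPUT -p tcp -m tcp --dport %d -j ACCEPT" % p)
        rules.append("iptables -A OUTPUT -p tcp -m tcp --sport %d -j ACCEPT" % p)
    return rules

for rule in iptables_rules(PORTS):
    print(rule)
```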

Save the rules:

/etc/init.d/iptables save

Restart the iptables service:

service iptables restart

If the error from problem 2 still occurs, you may need to edit the rules in /etc/sysconfig/iptables by hand and add them there. If there is a "reject-with icmp-host-prohibited" rule, the new rules must go before it. Note that in the configuration file you do not include the iptables command itself; a line looks simply like:

-A OUTPUT -p tcp -m tcp --sport 54311 -j ACCEPT

Or just turn off the firewall (recommended, since there are many ports and a lot of exceptions to add):

service iptables stop

  3. In /etc/hosts, make sure each host name maps to exactly one IP; otherwise errors occur. For example, mapping slave1 to both 127.0.0.1 and 192.168.225.66 at the same time can prevent data from being replicated from one node to another.
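A quick way to catch this misconfiguration is to scan /etc/hosts for host names that appear on lines with different IPs. A Python 3 sketch (the sample hosts content is made up to show the failure mode):

```python
def duplicate_hosts(hosts_text):
    # Map each host name to the set of IPs it appears under, then keep
    # only the names bound to more than one IP.
    seen = {}
    for line in hosts_text.splitlines():
        line = line.split('#')[0].strip()  # drop comments and blanks
        if not line:
            continue
        parts = line.split()
        ip, names = parts[0], parts[1:]
        for name in names:
            seen.setdefault(name, set()).add(ip)
    return {name: ips for name, ips in seen.items() if len(ips) > 1}

sample = """127.0.0.1 localhost slave1
192.168.225.16 master
192.168.225.66 slave1"""
print(duplicate_hosts(sample))  # slave1 is bound to two IPs
```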

  4. An exception like:

FATAL org.apache.hadoop.mapred.TaskTracker: Error running child : java.lang.OutOfMemoryError: Java heap space…

means there is not enough heap memory. There are several places to configure it:

a. In conf/hadoop-env.sh, the line export HADOOP_HEAPSIZE=1000 is commented out by default, giving a 1000 MB heap. Uncomment it and raise the value (with 16 GB of RAM, it can go up to 8 GB).

b. In conf/mapred-site.xml, add the mapred.map.child.java.opts property and set the Java heap option to -Xmx2048m or more. This adjusts the heap size of map tasks:

<property>
  <name>mapred.map.child.java.opts</name>
  <value>-Xmx2048m</value>
</property>

c. In conf/mapred-site.xml, add the mapred.reduce.child.java.opts property and set the Java heap option to -Xmx2048m or more. This adjusts the heap size of reduce tasks:

<property>
  <name>mapred.reduce.child.java.opts</name>
  <value>-Xmx2048m</value>
</property>

Note: after adjusting these values, restart the name node.

  5. An exception like:

java.io.IOException: File /user/root/pv_product_110124 could only be replicated to 0 nodes, instead of 1…

First make sure the Hadoop temporary directory has enough free space; running out of space causes this error.

If space is not the problem, try deleting the dfs/data directory inside the temporary directory and reformatting the name node:

hadoop namenode -format

Note: this command deletes the files stored in HDFS.

  6. An exception like:

java.io.IOException: Broken pipe…

means you should check your program; it may be printing something it shouldn't, such as debug output.

Source: [http://blog.csdn.net/inte_sleeper/article/details/6569990](http://blog.csdn.net/inte_sleeper/article/details/6569990)

Oracle NOT IN fails to return the expected results


January 25


First, I must thank aa and Liu Xing for finding the error in my post. I had carelessly written the same SQL statement three times, and I apologize for the trouble this caused readers. I have now revised the original text, and out of respect for my readers I re-investigated the problem in depth; the full story follows. The problem. Statement 1:

SELECT * FROM table1 A WHERE A.col1 NOT IN (SELECT col1 FROM table2 B)

Written this way, a row that should be returned is missing. If I rewrite it as statement 2:

SELECT * FROM table1 A WHERE NOT EXISTS (SELECT * FROM table2 B WHERE B.col1 = A.col1)

the result is correct: the row shows up. After some searching, I first assumed the subquery's result set was too large. Then an expert online pointed out the real cause: the subquery's result set contains NULLs. Changing the query to statement 3:

SELECT * FROM table1 A WHERE A.col1 NOT IN (SELECT col1 FROM table2 B WHERE B.col1 IS NOT NULL)

finds the row, exactly as expected. Here is the analysis of the problem.

1. First, NULL in Oracle. NULL means "no value" or "unknown". In logical operations, NULL mostly behaves like FALSE when filtering rows. Here is the truth table:

        AND NULL    OR NULL
TRUE    NULL        TRUE
FALSE   FALSE       NULL
NULL    NULL        NULL

In addition, comparing NULL with any other value, or using it in arithmetic (<, >, =, !=, +, -, *, /), yields NULL. To test whether a value is NULL, use IS NULL or IS NOT NULL.

2. Next, IN in Oracle. IN is a membership condition: given a set or a subquery, it compares against each member value. IN is functionally equivalent to =ANY, and NOT IN to !=ALL. Logically, IN tests the value against each member of the set or subquery result in turn. For example:

SELECT * FROM table1 A WHERE A.col1 IN (20, 50, NULL);

effectively executes

SELECT * FROM table1 A WHERE A.col1 = 20 OR A.col1 = 50 OR A.col1 = NULL;

Given NULL's behavior and the truth table, this WHERE clause simplifies (a NULL result returns no rows, just like FALSE) to

WHERE A.col1 = 20 OR A.col1 = 50

So if your table1 really contains rows whose col1 is NULL, this statement can never return those rows.

Now NOT IN. By the usual logical equivalences, NOT (X=Y OR N=M) is equivalent to X!=Y AND N!=M, so

SELECT * FROM table1 A WHERE A.col1 NOT IN (20, 50, NULL)

is equivalent to

SELECT * FROM table1 A WHERE A.col1 != 20 AND A.col1 != 50 AND A.col1 != NULL

By NULL's behavior and the truth table, no matter how the first two conditions evaluate, the overall result is always NULL or FALSE, so no row can ever be returned. That is why statement 1 finds nothing. Of course, if you strip the NULLs out of the subquery before applying NOT IN, as in statement 3, everything works.

Some readers may ask: what if I want all the rows of A whose col1 matches B's col1, even when both tables have NULLs in col1? Unfortunately, a bare IN in the WHERE clause cannot express that; you have to add the NULL condition in the outer query, for example:

SELECT * FROM table1 A WHERE A.col1 IN (SELECT B.col1 FROM table2 B) OR A.col1 IS NULL;
3. Finally, EXISTS. Some say EXISTS performs better than IN, but that is one-sided. Consider how EXISTS executes:

select * from t1 where exists ( select * from t2 where t2.col1 = t1.col1 )

is roughly equivalent to:

for x in ( select * from t1 ) loop
    if ( exists ( select * from t2 where t2.col1 = x.col1 ) ) then
        output the record in x
    end if
end loop

In other words, EXISTS loops over the outer query's result set and filters it against the subquery, so the size of the outer result set has the biggest impact on performance; if the outer result set is huge, EXISTS is not necessarily much faster. Others note that NOT IN does a full scan of both the outer query and the subquery and cannot use indexes, while NOT EXISTS performs a join, so with indexes on both joined columns it can perform somewhat better. As for real-world efficiency, analyze each case on its own merits.

Now let us see why statement 2 gets the correct result. Statement 2 was:

select * from table1 A where not exists (SELECT B.col1 FROM table2 B where B.col1 = A.col1)

which actually executes as:

for x in ( select * from table1 A ) loop
    if ( not exists ( select * from table2 B where B.col1 = x.col1 ) ) then
        output the record in x
    end if
end loop

Since table A contains no NULL rows, walking through table A picks out exactly the rows unique to A. That is why statement 2 can do statement 3's job. But what if table A contains NULL rows and table B does not? Try analyzing that yourself, and leave me a comment when you have the answer.

Answer: A's NULL rows are returned as well, because select * from table2 B where B.col1 = NULL returns no rows, so not exists ( select * from table2 B where B.col1 = x.col1 ) is true.

The SQL above has been verified on both MySQL and Oracle.
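The three-valued logic behind NOT IN can be simulated in a few lines. This Python 3 sketch uses None to stand in for SQL NULL; the function name is made up for illustration, and it only models the semantics discussed above, not a real SQL engine.

```python
def sql_not_in(value, subquery_values):
    """Mimic `value NOT IN (...)`: returns True, False, or None (UNKNOWN)."""
    if value is None:
        return None  # NULL compared with anything is UNKNOWN
    result = True
    for v in subquery_values:
        if v is None:
            # value != NULL is UNKNOWN; TRUE AND UNKNOWN stays UNKNOWN
            result = None
        elif v == value:
            # a definite match makes the whole predicate FALSE
            return False
    return result

# A WHERE clause keeps a row only when the predicate is exactly True:
rows = [10, 20, 30]
kept = [r for r in rows if sql_not_in(r, [20, 50, None]) is True]
print(kept)  # [] — the NULL makes every non-matching row UNKNOWN
```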

Comments (1)

Xing Liu wrote:

Why are these SQL statements all the same?

Feb. 11


Replacing the local package-management tool on redhat6

[root@redhat6 robin]# yum install gnome-packagekit

Loaded plugins: aliases, auto-update-debuginfo, changelog, dellsysid,

          : downloadonly, fastestmirror, filter-data, fs-snapshot, keys,

          : list-data, local, merge-conf, post-transaction-actions, presto,

          : priorities, product-id, protectbase, ps, refresh-packagekit,

          : remove-with-leaves, rpm-warm-cache, security, show-leaves,

          : subscription-manager, tmprepo, tsflags, upgrade-helper, verify,

          : versionlock

This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.

Loading mirror speeds from cached hostfile

* elrepo: elrepo.org

* epel: mirrors.yun-idc.com

* jpackage: sunsite.informatik.rwth-aachen.de

* rpmforge: mirror.oscc.org.my

* rpmfusion-nonfree-updates: ftp.sjtu.edu.cn

_local | 2.9 kB 00:00 ...

_local/primary_db | 89 kB 00:00 ...

Skipping filters plugin, no data

246 packages excluded due to repository priority protections

0 packages excluded due to repository protections

Setting up Install Process

Resolving Dependencies

Skipping filters plugin, no data

--> Running transaction check

---> Package gnome-packagekit.x86_64 0:2.28.3-7.el6 will be installed

--> Processing Dependency: PackageKit-device-rebind >= 0.5.0 for package: gnome-packagekit-2.28.3-7.el6.x86_64

--> Running transaction check

---> Package PackageKit-device-rebind.x86_64 0:0.5.8-21.el6 will be installed

--> Finished Dependency Resolution

Dependencies Resolved

================================================================================

Package Arch Version Repository

                                                                       Size

================================================================================

Installing:

gnome-packagekit x86_64 2.28.3-7.el6 base 2.5 M

Installing for dependencies:

PackageKit-device-rebind x86_64 0.5.8-21.el6 base 95 k

Transaction Summary

================================================================================

Install 2 Package(s)

Total download size: 2.6 M

Installed size: 8.1 M

Is this ok [y/N]: y

Downloading Packages:

Setting up and reading Presto delta metadata

Processing delta metadata

Package(s) data still to download: 2.6 M

(1/2): PackageKit-device-rebind-0.5.8-21.el6.x86_64.rpm | 95 kB 00:00

(2/2): gnome-packagekit-2.28.3-7.el6.x86_64.rpm | 2.5 MB 00:14


Total 182 kB/s | 2.6 MB 00:14

Running rpm_check_debug

Running Transaction Test

Transaction Test Succeeded

Running Transaction

Installing : PackageKit-device-rebind-0.5.8-21.el6.x86_64 1/2

Installing : gnome-packagekit-2.28.3-7.el6.x86_64 2/2

Verifying : gnome-packagekit-2.28.3-7.el6.x86_64 1/2

Verifying : PackageKit-device-rebind-0.5.8-21.el6.x86_64 2/2

Installed:

gnome-packagekit.x86_64 0:2.28.3-7.el6

Dependency Installed:

PackageKit-device-rebind.x86_64 0:0.5.8-21.el6

Complete!

New leaves:

gnome-packagekit.x86_64

Replace the local yum repositories with the CentOS 6 mirror from 163; see "Redhat 使用CentOS的yum源进行升级或软件安装(最新6.0-6.4,不含最新版6.5)".

After running yum makecache,

run: yum remove PackageKit

[root@redhat6 robin]# yum update gnome-packagekit

Loaded plugins: aliases, auto-update-debuginfo, changelog, dellsysid,

          : downloadonly, fastestmirror, filter-data, fs-snapshot, keys,

          : list-data, local, merge-conf, post-transaction-actions, presto,

          : priorities, product-id, protectbase, ps, remove-with-leaves,

          : rpm-warm-cache, security, show-leaves, subscription-manager,

          : tmprepo, tsflags, upgrade-helper, verify, versionlock

This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.

Loading mirror speeds from cached hostfile

* elrepo: elrepo.org

* epel: mirrors.yun-idc.com

* jpackage: mirror.ibcp.fr

* rpmforge: mirror.oscc.org.my

* rpmfusion-nonfree-updates: ftp.sjtu.edu.cn

Skipping filters plugin, no data

246 packages excluded due to repository priority protections

0 packages excluded due to repository protections

Setting up Update Process

No Packages marked for Update

[root@redhat6 robin]# yum remove PackageKit

Loaded plugins: aliases, auto-update-debuginfo, changelog, dellsysid,

          : downloadonly, fastestmirror, filter-data, fs-snapshot, keys,

          : list-data, local, merge-conf, post-transaction-actions, presto,

          : priorities, product-id, protectbase, ps, remove-with-leaves,

          : rpm-warm-cache, security, show-leaves, subscription-manager,

          : tmprepo, tsflags, upgrade-helper, verify, versionlock

This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.

Setting up Remove Process

Resolving Dependencies

--> Running transaction check

---> Package PackageKit.x86_64 0:0.5.8-21.el6 will be erased

--> Processing Dependency: PackageKit = 0.5.8-21.el6 for package: PackageKit-glib-0.5.8-21.el6.x86_64

--> Processing Dependency: PackageKit >= 0.5.0 for package: gnome-packagekit-2.28.3-7.el6.x86_64

--> Running transaction check

---> Package PackageKit-glib.x86_64 0:0.5.8-21.el6 will be erased

--> Processing Dependency: libpackagekit-glib2.so.12()(64bit) for package: PackageKit-gstreamer-plugin-0.5.8-21.el6.x86_64

--> Processing Dependency: PackageKit-glib = 0.5.8-21.el6 for package: PackageKit-gstreamer-plugin-0.5.8-21.el6.x86_64

--> Processing Dependency: PackageKit-glib = 0.5.8-21.el6 for package: PackageKit-gtk-module-0.5.8-21.el6.x86_64

--> Processing Dependency: PackageKit-glib = 0.5.8-21.el6 for package: PackageKit-device-rebind-0.5.8-21.el6.x86_64

---> Package gnome-packagekit.x86_64 0:2.28.3-7.el6 will be erased

--> Running transaction check

---> Package PackageKit-device-rebind.x86_64 0:0.5.8-21.el6 will be erased

---> Package PackageKit-gstreamer-plugin.x86_64 0:0.5.8-21.el6 will be erased

---> Package PackageKit-gtk-module.x86_64 0:0.5.8-21.el6 will be erased

--> Finished Dependency Resolution

Dependencies Resolved

================================================================================

Package

  Arch   Version

              Repository                                               Size

================================================================================

Removing:

PackageKit

  x86_64 0.5.8-21.el6

              @anaconda-RedHatEnterpriseLinux-201311111358.x86_64/6.5 2.3 M

Removing for dependencies:

PackageKit-device-rebind

  x86_64 0.5.8-21.el6

              @anaconda-RedHatEnterpriseLinux-201311111358.x86_64/6.5 231 k

PackageKit-glib

  x86_64 0.5.8-21.el6

              @anaconda-RedHatEnterpriseLinux-201311111358.x86_64/6.5 734 k

PackageKit-gstreamer-plugin

  x86_64 0.5.8-21.el6

              @anaconda-RedHatEnterpriseLinux-201311111358.x86_64/6.5 232 k

PackageKit-gtk-module

  x86_64 0.5.8-21.el6

              @anaconda-RedHatEnterpriseLinux-201311111358.x86_64/6.5 231 k

gnome-packagekit

  x86_64 2.28.3-7.el6

              @anaconda-RedHatEnterpriseLinux-201311111358.x86_64/6.5 7.9 M

Transaction Summary

================================================================================

Remove 6 Package(s)

Installed size: 12 M

Is this ok [y/N]: y

Downloading Packages:

Running rpm_check_debug

Running Transaction Test

Transaction Test Succeeded

Running Transaction

Erasing : gnome-packagekit-2.28.3-7.el6.x86_64 1/6

Erasing : PackageKit-device-rebind-0.5.8-21.el6.x86_64 2/6

Erasing : PackageKit-gstreamer-plugin-0.5.8-21.el6.x86_64 3/6

Erasing : PackageKit-gtk-module-0.5.8-21.el6.x86_64 4/6

Erasing : PackageKit-0.5.8-21.el6.x86_64 5/6

Erasing : PackageKit-glib-0.5.8-21.el6.x86_64 6/6

Loading mirror speeds from cached hostfile

* elrepo: elrepo.org

* epel: mirrors.yun-idc.com

* jpackage: mirror.ibcp.fr

* rpmforge: mirror.oscc.org.my

* rpmfusion-nonfree-updates: ftp.sjtu.edu.cn

246 packages excluded due to repository priority protections

0 packages excluded due to repository protections

Verifying : PackageKit-gstreamer-plugin-0.5.8-21.el6.x86_64 1/6

Verifying : gnome-packagekit-2.28.3-7.el6.x86_64 2/6

Verifying : PackageKit-device-rebind-0.5.8-21.el6.x86_64 3/6

Verifying : PackageKit-gtk-module-0.5.8-21.el6.x86_64 4/6

Verifying : PackageKit-0.5.8-21.el6.x86_64 5/6

Verifying : PackageKit-glib-0.5.8-21.el6.x86_64 6/6

Removed:

PackageKit.x86_64 0:0.5.8-21.el6

Dependency Removed:

PackageKit-device-rebind.x86_64 0:0.5.8-21.el6

PackageKit-glib.x86_64 0:0.5.8-21.el6

PackageKit-gstreamer-plugin.x86_64 0:0.5.8-21.el6

PackageKit-gtk-module.x86_64 0:0.5.8-21.el6

gnome-packagekit.x86_64 0:2.28.3-7.el6

Complete!

Then run yum install PackageKit:

[root@redhat6 robin]# yum install PackageKit

Loaded plugins: aliases, auto-update-debuginfo, changelog, dellsysid,

          : downloadonly, fastestmirror, filter-data, fs-snapshot, keys,

          : list-data, local, merge-conf, post-transaction-actions, presto,

          : priorities, product-id, protectbase, ps, remove-with-leaves,

          : rpm-warm-cache, security, show-leaves, subscription-manager,

          : tmprepo, tsflags, upgrade-helper, verify, versionlock

This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.

Loading mirror speeds from cached hostfile

* elrepo: elrepo.org

* epel: mirrors.yun-idc.com

* jpackage: mirror.ibcp.fr

* rpmforge: mirror.oscc.org.my

* rpmfusion-nonfree-updates: ftp.sjtu.edu.cn

Skipping filters plugin, no data

246 packages excluded due to repository priority protections

0 packages excluded due to repository protections

Setting up Install Process

Resolving Dependencies

Skipping filters plugin, no data

--> Running transaction check

---> Package PackageKit.x86_64 0:0.5.8-21.el6 will be installed

--> Processing Dependency: PackageKit-yum-plugin = 0.5.8-21.el6 for package: PackageKit-0.5.8-21.el6.x86_64

--> Processing Dependency: PackageKit-yum = 0.5.8-21.el6 for package: PackageKit-0.5.8-21.el6.x86_64

--> Processing Dependency: PackageKit-gtk-module = 0.5.8-21.el6 for package: PackageKit-0.5.8-21.el6.x86_64

--> Processing Dependency: PackageKit-glib = 0.5.8-21.el6 for package: PackageKit-0.5.8-21.el6.x86_64

--> Processing Dependency: libpackagekit-glib2.so.12()(64bit) for package: PackageKit-0.5.8-21.el6.x86_64

--> Running transaction check

---> Package PackageKit-glib.x86_64 0:0.5.8-21.el6 will be installed

---> Package PackageKit-gtk-module.x86_64 0:0.5.8-21.el6 will be installed

---> Package PackageKit-yum.x86_64 0:0.5.8-21.el6 will be installed

---> Package PackageKit-yum-plugin.x86_64 0:0.5.8-21.el6 will be installed

--> Finished Dependency Resolution

Dependencies Resolved

================================================================================

Package Arch Version Repository Size

================================================================================

Installing:

PackageKit x86_64 0.5.8-21.el6 base 526 k

Installing for dependencies:

PackageKit-glib x86_64 0.5.8-21.el6 base 221 k

PackageKit-gtk-module x86_64 0.5.8-21.el6 base 95 k

PackageKit-yum x86_64 0.5.8-21.el6 base 156 k

PackageKit-yum-plugin x86_64 0.5.8-21.el6 base 92 k

Transaction Summary

================================================================================

Install 5 Package(s)

Total download size: 1.1 M

Installed size: 3.9 M

Is this ok [y/N]: y

Downloading Packages:

Setting up and reading Presto delta metadata

Processing delta metadata

Package(s) data still to download: 1.1 M

(1/5): PackageKit-0.5.8-21.el6.x86_64.rpm | 526 kB 00:01

(2/5): PackageKit-glib-0.5.8-21.el6.x86_64.rpm | 221 kB 00:00

(3/5): PackageKit-gtk-module-0.5.8-21.el6.x86_64.rpm | 95 kB 00:01

(4/5): PackageKit-yum-0.5.8-21.el6.x86_64.rpm | 156 kB 00:00

(5/5): PackageKit-yum-plugin-0.5.8-21.el6.x86_64.rpm | 92 kB 00:00


Total 148 kB/s | 1.1 MB 00:07

Running rpm_check_debug

Running Transaction Test

Transaction Test Succeeded

Running Transaction

Installing : PackageKit-yum-0.5.8-21.el6.x86_64 1/5

Installing : PackageKit-gtk-module-0.5.8-21.el6.x86_64 2/5

Installing : PackageKit-glib-0.5.8-21.el6.x86_64 3/5

Installing : PackageKit-0.5.8-21.el6.x86_64 4/5

Installing : PackageKit-yum-plugin-0.5.8-21.el6.x86_64 5/5

Verifying : PackageKit-yum-plugin-0.5.8-21.el6.x86_64 1/5

Verifying : PackageKit-yum-0.5.8-21.el6.x86_64 2/5

Verifying : PackageKit-gtk-module-0.5.8-21.el6.x86_64 3/5

Verifying : PackageKit-glib-0.5.8-21.el6.x86_64 4/5

Verifying : PackageKit-0.5.8-21.el6.x86_64 5/5

Installed:

PackageKit.x86_64 0:0.5.8-21.el6

Dependency Installed:

PackageKit-glib.x86_64 0:0.5.8-21.el6

PackageKit-gtk-module.x86_64 0:0.5.8-21.el6

PackageKit-yum.x86_64 0:0.5.8-21.el6

PackageKit-yum-plugin.x86_64 0:0.5.8-21.el6

Complete!

Finally, run yum install gnome-packagekit

and the replacement is complete.

Forcing a firefox update from a specific repository

Under CentOS 6.5, firefox's menus often freeze, so update it from a different repository.

Using the remi repository:

## Install Remi repository for RHEL/CentOS 6.5/6.4/6.3/6.2/6.1/6.0 ###
# wget http://rpms.famillecollet.com/enterprise/remi-release-6.rpm
# rpm -Uvh remi-release-6.rpm

## Check Availability of Firefox 26 in RHEL/CentOS 6.5/6.4/6.3/6.2/6.1/6.0 and Fedora 15/14 ###
# yum --enablerepo=remi list firefox

## Install or Update Firefox 26 in RHEL/CentOS 6.5/6.4/6.3/6.2/6.1/6.0 ###
# yum --enablerepo=remi install firefox

The commands above are for when firefox is not yet installed. To update an existing firefox:

## Update Firefox 26 in RHEL/CentOS 6.5/6.4/6.3/6.2/6.1/6.0 ###
# yum --enablerepo=remi update firefox

Changing the system language on redhat6

  1. Method one: click System, choose Administration, click Language, select the language you want (e.g. English or Chinese (Simplified)), then restart the system to switch languages.

  2. Method two: right-click the desktop, choose "Open Terminal", run the command system-config-language in the terminal, select the language you want, then restart the system to switch languages.

  3. Pick whichever method suits your situation. If the system-config-language command is not available, install it with yum install system-config-language.