My HBase Journey

I've been learning HBase recently and have read through a lot of material; the main references:
http://blog.chinaunix.net/u3/102568/article_144792.html
http://blog.csdn.net/dajuezhao/category/724896.aspx
http://www.javabloger.com/article/apache-hbase-shell-and-install-key-value.html
Read through everything in those three sources and you should have a solid grasp of the basic HBase concepts.
OK, now my environment:
cygwin + hadoop-0.20.2 + zookeeper-3.3.2 + hbase-0.20.6 (+ eclipse3.6)
I won't cover the configuration details here; there are plenty of writeups online, and as long as you are careful it works out fine.
Assuming everything is configured, start the services. The startup order matters:
1. hadoop: ./start-all.sh
2. zookeeper: ./zkServer.sh start
3. hbase: ./start-hbase.sh
Stop in the reverse order: hbase, then zookeeper, then hadoop.
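
Before writing any client code, it helps to verify that the master is actually reachable from Java. Below is a minimal connectivity check; treat it as an untested sketch that assumes the HBase 0.20.x client jars are on the classpath and that HBaseAdmin.checkHBaseAvailable behaves as described in the 0.20 Javadoc (throwing MasterNotRunningException when the master is down):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.MasterNotRunningException;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class CheckHBase {
	public static void main(String[] args) throws Exception {
		// same ZooKeeper settings as in init() further below
		HBaseConfiguration conf = new HBaseConfiguration();
		conf.set("hbase.zookeeper.quorum", "127.0.0.1");
		conf.set("hbase.zookeeper.property.clientPort", "2181");
		try {
			HBaseAdmin.checkHBaseAvailable(conf);// pings the master
			System.out.println("HBase master is up");
		} catch (MasterNotRunningException e) {
			System.out.println("HBase master is not running: " + e.getMessage());
		}
	}
}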

Screenshot after a successful start:
http://localhost:60010/master.jsp (the HBase master web UI)



Now for the Java code to operate on HBase; I wrote simple insert, update, delete and query operations:
package org.test;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Scanner;
import org.apache.hadoop.hbase.io.BatchUpdate;
import org.apache.hadoop.hbase.io.Cell;
import org.apache.hadoop.hbase.io.RowResult;
import org.apache.hadoop.hbase.util.Bytes;

public class TestHBase {
	public final static String COLENDCHAR = String.valueOf(KeyValue.COLUMN_FAMILY_DELIMITER);//":"
	final String key_colName = "colN";
	final String key_colCluster = "colClut";
	final String key_colDataType = "colDT";
	final String key_colVal = "colV";
	// the HBase configuration
	HBaseConfiguration conf;
	HBaseAdmin admin = null;
	/**
	 * @param args
	 */
	public static void main(String[] args) {
		TestHBase app = new TestHBase();
		
		//app.test();
		
		app.init();
		app.go();
		app.list();
	}
	
	void list(){
		try {
			String tableName = "htcjd0";
			Map rsMap = this.getHTData(tableName);
			System.out.println(rsMap.toString());
		} catch (Exception e) {
			e.printStackTrace();
		}
	}
	void go(){
		try {
			// create the table
			String tableName = "htcjd0";
			String[] columns = new String[]{"col"};
			this.createHTable(tableName, columns);
			// insert data
			List list = new ArrayList();
			List rowList = null;
			Map rowMap = null;
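			// Each rowList groups the cells built in one loop iteration. Note that
			// insertRow() below uses the cell VALUE as the row key, so every cell
			// actually lands in a row of its own.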
			for (int i = 0; i < 10; i++) {
				rowList = new ArrayList();
				
				rowMap = new HashMap();
				rowMap.put(key_colName, "col");
				//rowMap.put(key_colCluster, "cl_name");
				rowMap.put(key_colVal, "陈杰堆nocluster"+i);
				rowList.add(rowMap);
				
				rowMap = new HashMap();
				rowMap.put(key_colName, "col");
				rowMap.put(key_colCluster, "cl_name");
				rowMap.put(key_colVal, "陈杰堆cl_"+i);
				rowList.add(rowMap);
				
				rowMap = new HashMap();
				rowMap.put(key_colName, "col");
				rowMap.put(key_colCluster, "cl_age");
				rowMap.put(key_colVal, "cl_"+i);
				rowList.add(rowMap);
				
				rowMap = new HashMap();
				rowMap.put(key_colName, "col");
				rowMap.put(key_colCluster, "cl_sex");
				rowMap.put(key_colVal, "列cl_"+i);
				rowList.add(rowMap);
				
				list.add(rowList);
			}
			HTable hTable = this.getHTable(tableName);
			this.insertRow(hTable, list);
		} catch (Exception e) {
			e.printStackTrace();
		}
	}
	void go0(){
		try {
			// create the table
			String tableName = "htcjd";
			String[] columns = new String[]{"name","age","col"};
			this.createHTable(tableName, columns);
			// insert data
			List list = new ArrayList();
			List rowList = null;
			Map rowMap = null;
			for (int i = 0; i < 10; i++) {
				rowList = new ArrayList();

				// one Map per cell; reusing a single Map would just overwrite
				// its keys and keep only the last column
				rowMap = new HashMap();
				rowMap.put(key_colName, "name");
				rowMap.put(key_colVal, "测试hbase"+i);
				rowList.add(rowMap);

				rowMap = new HashMap();
				rowMap.put(key_colName, "age");
				rowMap.put(key_colVal, ""+i);
				rowList.add(rowMap);

				rowMap = new HashMap();
				rowMap.put(key_colName, "col");
				rowMap.put(key_colVal, "列"+i);
				rowList.add(rowMap);
				
				list.add(rowList);
			}
			HTable hTable = this.getHTable(tableName);
			this.insertRow(hTable, list);
		} catch (Exception e) {
			e.printStackTrace();
		}
	}
	void init() {
		try {
			Configuration HBASE_CONFIG = new Configuration();
			HBASE_CONFIG.set("hbase.zookeeper.quorum", "127.0.0.1");
			HBASE_CONFIG.set("hbase.zookeeper.property.clientPort", "2181");
			this.conf = new HBaseConfiguration(HBASE_CONFIG);
			this.admin = new HBaseAdmin(conf);
		} catch (Exception e) {
			e.printStackTrace();
		}
	}
	
	/**
	 * Create a table descriptor.
	 * @param tableName
	 * @return
	 * @throws Exception
	 */
	HTableDescriptor createHTDesc(final String tableName)throws Exception{
		try {
			return new HTableDescriptor(tableName);
		} catch (Exception e) {
			throw e;
		}
	}
	
	/**
	 * Normalize a column name for HBase. Column names are either "course:" or
	 * "course:math", i.e. the family always ends with a colon and may be
	 * followed by a qualifier.
	 * @param colName the column family
	 * @param cluster the qualifier (called the "cluster" throughout this code)
	 * @return
	 */
	String fixColName(String colName,String cluster){
		if(cluster!=null&&cluster.trim().length()>0&&colName.endsWith(cluster)){
			return colName;
		}
		String tmp = colName;
		int index = colName.indexOf(COLENDCHAR);
		if(index == -1){
			tmp += COLENDCHAR;
		}
		// append the qualifier directly
		if(cluster!=null&&cluster.trim().length()>0){
			tmp += cluster;
		}
		return tmp;		
	}
	String fixColName(String colName){
		return this.fixColName(colName, null);
	}
	
	/**
	 * Create a column family descriptor. The family name gets a trailing
	 * colon, so qualifiers can be appended under it later if the schema
	 * needs to grow.
	 * @param colName
	 * @return
	 * @throws Exception
	 */
	HColumnDescriptor createHCDesc(String colName)throws Exception{
		try {
			String tmp = this.fixColName(colName);
			byte[] colNameByte = Bytes.toBytes(tmp);
			return new HColumnDescriptor(colNameByte);
		} catch (Exception e) {
			throw e;
		}
	}
	
	/**
	 * Add a column family to a table descriptor (family only, no qualifier).
	 * @param htdesc
	 * @param colName
	 * @param readonly
	 * @throws Exception
	 */
	void addFamily(HTableDescriptor htdesc,String colName,final boolean readonly)throws Exception{
		try {
			htdesc.addFamily(this.createHCDesc(colName));
			htdesc.setReadOnly(readonly);
		} catch (Exception e) {
			throw e;
		}
	}
	
	/**
	 * Delete a column family (no qualifier).
	 * @param tableName
	 * @param colName
	 * @throws Exception
	 */
	void removeFamily(String tableName,String colName)throws Exception{
		try {
			String tmp = this.fixColName(colName);
			this.admin.deleteColumn(tableName, tmp);
		} catch (Exception e) {
			throw e;
		}
	}
	
	/**
	 * Delete a column (family plus qualifier).
	 * @param tableName
	 * @param colName
	 * @param cluster
	 * @throws Exception
	 */
	void removeFamily(String tableName,String colName,String cluster)throws Exception{
		try {
			String tmp = this.fixColName(colName,cluster);
			this.admin.deleteColumn(tableName, tmp);
		} catch (Exception e) {
			throw e;
		}
	}
	/**
	 * Create a table.
	 * @param tableName
	 * @param columns
	 * @throws Exception
	 */
	void createHTable(String tableName)throws Exception{
		try {
			if(admin.tableExists(tableName))return;// nothing to do if the table already exists
			HTableDescriptor htdesc = this.createHTDesc(tableName);
			admin.createTable(htdesc);
		} catch (Exception e) {
			throw e;
		}
	}
	void createHTable(String tableName,String[] columns)throws Exception{
		try {
			if(admin.tableExists(tableName))return;// nothing to do if the table already exists
			HTableDescriptor htdesc = this.createHTDesc(tableName);
			for (int i = 0; i < columns.length; i++) {
				String colName = columns[i];
				this.addFamily(htdesc, colName, false);
			}
			admin.createTable(htdesc);
		} catch (Exception e) {
			throw e;
		}
	}
	/**
	 * Delete a table.
	 * @param tableName
	 * @throws Exception
	 */
	void removeHTable(String tableName)throws Exception{
		try {
			admin.disableTable(tableName);// disable first
			admin.deleteTable(tableName);// then delete
		} catch (Exception e) {
			throw e;
		}
	}
	
	/**
	 * Get a handle to a table.
	 * @param tableName
	 * @return
	 * @throws Exception
	 */
	HTable getHTable(String tableName)throws Exception{
		try {
			return new HTable(conf, tableName);
		} catch (Exception e) {
			throw e;
		}
	}
	
	void updateColumn(String tableName,String rowID,String colName,String cluster,String value)throws Exception{
		try {
			BatchUpdate batchUpdate = new BatchUpdate(rowID);
			String tmp = this.fixColName(colName, cluster);
			batchUpdate.put(tmp, Bytes.toBytes(value));
			
			HTable hTable = this.getHTable(tableName);
			hTable.commit(batchUpdate); 
		} catch (Exception e) {
			throw e;
		}
	}
	
	void updateColumn(String tableName,String rowID,String colName,String value)throws Exception{
		try {
			this.updateColumn(tableName, rowID, colName, null, value); 
		} catch (Exception e) {
			throw e;
		}
	}
	
	void deleteColumn(String tableName,String rowID,String colName,String cluster)throws Exception{
		try {
			BatchUpdate batchUpdate = new BatchUpdate(rowID);
			String tmp = this.fixColName(colName, cluster);
			batchUpdate.delete(tmp);
			HTable hTable = this.getHTable(tableName);
			hTable.commit(batchUpdate); 
		} catch (Exception e) {
			throw e;
		}
	}
	
	void deleteColumn(String tableName,String rowID,String colName)throws Exception{
		try {
			this.deleteColumn(tableName, rowID, colName, null); 
		} catch (Exception e) {
			throw e;
		}
	}
	/**
	 * Get the value of one column in one row.
	 * @param tableName
	 * @param rowID
	 * @param colName
	 * @param cluster
	 * @return
	 * @throws Exception
	 */
	String getColumnValue(String tableName,String rowID,String colName,String cluster)throws Exception{
		try {
			String tmp = this.fixColName(colName, cluster);
			HTable hTable = this.getHTable(tableName);
			Cell cell = hTable.get(rowID, tmp);
			if(cell==null)return null;
			return new String(cell.getValue());
		} catch (Exception e) {
			throw e;
		}
	}
	
	/**
	 * Get the values of one column across all rows.
	 * @param tableName
	 * @param colName
	 * @param cluster if null or empty, values for the whole family are returned
	 * @return
	 * @throws Exception
	 */
	Map getColumnValue(String tableName, String colName, String cluster)throws Exception {
		Scanner scanner = null;
		try {
			String tmp = this.fixColName(colName, cluster);
			HTable hTable = this.getHTable(tableName);
			scanner = hTable.getScanner(new String[] { tmp });// "myColumnFamily:columnQualifier1"
			RowResult rowResult = scanner.next();
			Map resultMap = new HashMap();
			String row, value;
			Cell cell = null;
			while (rowResult != null) {
				row = new String(rowResult.getRow());
				cell = rowResult.get(Bytes.toBytes(tmp));
				if (cell == null) {
					resultMap.put(row, null);
				} else {
					resultMap.put(row, new String(cell.getValue()));// store as String, matching getHTData()
				}
				rowResult = scanner.next();
			}
			
			return resultMap;
		} catch (Exception e) {
			throw e;
		}finally{
			if(scanner!=null){
				scanner.close();// always close the scanner
			}
		}
	}
	
	/**
	 * Fetch all of a table's data.
	 * @param tableName
	 * @return Map
	 * @throws Exception
	 */
	public Map getHTData(String tableName) throws Exception {
		ResultScanner rs = null;
		try {
			HTable table = new HTable(this.conf, tableName);
			Scan s = new Scan();
			rs = table.getScanner(s);
			Map resultMap = new HashMap();
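			// NOTE: the map is keyed by column name only, so each scanned row
			// overwrites the previous one; after the loop it holds the values
			// of the last row.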
			for (Result r : rs) {
				for (KeyValue kv : r.raw()) {
					resultMap.put(new String(kv.getColumn()),
							new String(kv.getValue()));
				}
			}
			return resultMap;
		} catch (Exception e) {
			throw e;
		} finally {
			if (rs != null)
				rs.close();
		}
	}
	
	// insert records
	void insertRow(HTable table,List dataList)throws Exception{
		try {
			Put put = null;
			String colName = null;
			String colCluster = null;
			String colDataType = null;
			byte[] value;
			List rowDataList = null;
			Map rowDataMap = null;
			for (Iterator iterator = dataList.iterator(); iterator.hasNext();) {
				rowDataList = (List) iterator.next();
				for(int i=0;i<rowDataList.size();i++){
					rowDataMap = (Map) rowDataList.get(i);
					colName = (String)rowDataMap.get(key_colName);
					colCluster = (String)rowDataMap.get(key_colCluster);
					colDataType = (String)rowDataMap.get(key_colDataType);
					Object val = rowDataMap.get(key_colVal);
					value = Bytes.toBytes(String.valueOf(val));
//					// convert according to the declared data type
//					if("string".equalsIgnoreCase(colDataType)){
//						value = Bytes.toBytes((String)val);
//					}else if("int".equalsIgnoreCase(colDataType)){
//						value = Bytes.toBytes(Integer.parseInt(String.valueOf(val)));
//					}else if("float".equalsIgnoreCase(colDataType)){
//						value = Bytes.toBytes(Float.parseFloat(String.valueOf(val)));
//					}else if("long".equalsIgnoreCase(colDataType)){
//						value = Bytes.toBytes(Long.parseLong(String.valueOf(val)));
//					}else if("double".equalsIgnoreCase(colDataType)){
//						value = Bytes.toBytes(Double.parseDouble(String.valueOf(val)));
//					}else if("char".equalsIgnoreCase(colDataType)){
//						value = Bytes.toBytes(String.valueOf(val));
//					}else if("byte".equalsIgnoreCase(colDataType)){
//						value = (byte[])val;
//					}
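					// NOTE: the cell value doubles as the row key below, so
					// every cell is written into a row of its own.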
					put = new Put(value);
					String tmp = this.fixColName(colName, colCluster);
					byte[] colNameByte = Bytes.toBytes(tmp);
					byte[][] famAndQf = KeyValue.parseColumn(colNameByte);
					put.add(famAndQf[0], famAndQf[1], value);
					table.put(put);
				}
			}
		} catch (Exception e) {
			throw e;
		}
	}
	// TODO: get the table's schema information

}
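
One caveat about the class above: updateColumn(), deleteColumn() and the Scanner/RowResult-based getColumnValue() still use BatchUpdate, Cell and RowResult, which HBase 0.20 already deprecates in favor of Put/Get/Delete/Scan. Below is a minimal, untested sketch of the same single-cell update, read and delete with the newer client classes (method names taken from the 0.20 Javadoc; the row key "row1" and the value are made up for illustration):

import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

	// could live inside TestHBase, reusing this.conf from init()
	void newApiDemo() throws Exception {
		HTable table = new HTable(conf, "htcjd0");
		byte[] row = Bytes.toBytes("row1");// hypothetical row key
		byte[] family = Bytes.toBytes("col");
		byte[] qualifier = Bytes.toBytes("cl_name");

		// update == put: writes a new version of the cell
		Put put = new Put(row);
		put.add(family, qualifier, Bytes.toBytes("some value"));
		table.put(put);

		// read the cell back
		Get get = new Get(row);
		get.addColumn(family, qualifier);
		Result result = table.get(get);
		System.out.println(Bytes.toString(result.getValue(family, qualifier)));

		// delete the newest version of the cell
		Delete delete = new Delete(row);
		delete.deleteColumn(family, qualifier);
		table.delete(delete);
	}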

Then run it in Eclipse and watch the console:

[hadoop] INFO [main] ZooKeeper.logEnv(97) | Client environment:zookeeper.version=3.3.2-1031432, built on 11/05/2010 05:32 GMT
[hadoop] INFO [main] ZooKeeper.logEnv(97) | Client environment:host.name=chenjiedui
[hadoop] INFO [main] ZooKeeper.logEnv(97) | Client environment:java.version=1.6.0_05
[hadoop] INFO [main] ZooKeeper.logEnv(97) | Client environment:java.vendor=Sun Microsystems Inc.
[hadoop] INFO [main] ZooKeeper.logEnv(97) | Client environment:java.home=D:\jdk1.6.0_05\jre
[hadoop] INFO [main] ZooKeeper.logEnv(97) | Client environment:java.class.path=D:\workspace\MyHadoopApp\bin;D:\workspace\MyHadoopApp\lib\commons-lang-2.4.jar;D:\workspace\MyHadoopApp\lib\commons-logging-1.1.1.jar;D:\workspace\Hadoop0.20.2\bin;D:\workspace\Hadoop0.20.2\lib\commons-cli-1.2.jar;D:\workspace\Hadoop0.20.2\lib\commons-codec-1.3.jar;D:\workspace\Hadoop0.20.2\lib\commons-el-1.0.jar;D:\workspace\Hadoop0.20.2\lib\commons-httpclient-3.0.1.jar;D:\workspace\Hadoop0.20.2\lib\commons-logging-1.0.4.jar;D:\workspace\Hadoop0.20.2\lib\commons-logging-api-1.0.4.jar;D:\workspace\Hadoop0.20.2\lib\commons-net-1.4.1.jar;D:\workspace\Hadoop0.20.2\lib\core-3.1.1.jar;D:\workspace\Hadoop0.20.2\lib\hsqldb-1.8.0.10.jar;D:\workspace\Hadoop0.20.2\lib\jasper-compiler-5.5.12.jar;D:\workspace\Hadoop0.20.2\lib\jasper-runtime-5.5.12.jar;D:\workspace\Hadoop0.20.2\lib\jets3t-0.6.1.jar;D:\workspace\Hadoop0.20.2\lib\jetty-6.1.14.jar;D:\workspace\Hadoop0.20.2\lib\jetty-util-6.1.14.jar;D:\workspace\Hadoop0.20.2\lib\junit-3.8.1.jar;D:\workspace\Hadoop0.20.2\lib\kfs-0.2.2.jar;D:\workspace\Hadoop0.20.2\lib\log4j-1.2.15.jar;D:\workspace\Hadoop0.20.2\lib\mockito-all-1.8.0.jar;D:\workspace\Hadoop0.20.2\lib\oro-2.0.8.jar;D:\workspace\Hadoop0.20.2\lib\servlet-api-2.5-6.1.14.jar;D:\workspace\Hadoop0.20.2\lib\slf4j-api-1.4.3.jar;D:\workspace\Hadoop0.20.2\lib\slf4j-log4j12-1.4.3.jar;D:\workspace\Hadoop0.20.2\lib\xmlenc-0.52.jar;D:\workspace\Hadoop0.20.2\lib\ant.jar;D:\workspace\Hadoop0.20.2\lib\jsp-2.1.jar;D:\workspace\Hadoop0.20.2\lib\jsp-api-2.1.jar;D:\workspace\Hadoop0.20.2\lib\ftplet-api-1.0.0-SNAPSHOT.jar;D:\workspace\Hadoop0.20.2\lib\ftpserver-core-1.0.0-SNAPSHOT.jar;D:\workspace\Hadoop0.20.2\lib\ftpserver-server-1.0.0-SNAPSHOT.jar;D:\workspace\Hadoop0.20.2\lib\libthrift.jar;D:\workspace\Hadoop0.20.2\lib\mina-core-2.0.0-M2-20080407.124109-12.jar;D:\workspace\Hadoop0.20.2\libs\lucene\lucene-core-3.0.1.jar;D:\workspace\HBase0.20.6\bin;D:\workspace\HBase0.20.6\lib\commons-cli-2.0-SNAPSHOT.jar;D:\workspace\HBase0.20.6\lib\commons-el-from-jetty-5.1.4.jar;D:\workspace\HBase0.20.6\lib\commons-httpclient-3.0.1.jar;D:\workspace\HBase0.20.6\lib\commons-logging-1.0.4.jar;D:\workspace\HBase0.20.6\lib\commons-logging-api-1.0.4.jar;D:\workspace\HBase0.20.6\lib\commons-math-1.1.jar;D:\workspace\HBase0.20.6\lib\hadoop-0.20.2-core.jar;D:\workspace\HBase0.20.6\lib\jasper-compiler-5.5.12.jar;D:\workspace\HBase0.20.6\lib\jasper-runtime-5.5.12.jar;D:\workspace\HBase0.20.6\lib\jetty-6.1.14.jar;D:\workspace\HBase0.20.6\lib\jetty-util-6.1.14.jar;D:\workspace\HBase0.20.6\lib\jruby-complete-1.2.0.jar;D:\workspace\HBase0.20.6\lib\junit-4.8.1.jar;D:\workspace\HBase0.20.6\lib\libthrift-r771587.jar;D:\workspace\HBase0.20.6\lib\log4j-1.2.15.jar;D:\workspace\HBase0.20.6\lib\lucene-core-2.2.0.jar;D:\workspace\HBase0.20.6\lib\servlet-api-2.5-6.1.14.jar;D:\workspace\HBase0.20.6\lib\xmlenc-0.52.jar;D:\workspace\HBase0.20.6\lib\zookeeper-3.3.2.jar;D:\workspace\MyHadoopApp\lib\commons-cli-2.0-SNAPSHOT.jar;D:\workspace\MyHadoopApp\lib\log4j-1.2.15.jar;D:\workspace\MyHadoopApp\lib\hbase\commons-el-from-jetty-5.1.4.jar;D:\workspace\MyHadoopApp\lib\hbase\commons-httpclient-3.0.1.jar;D:\workspace\MyHadoopApp\lib\hbase\commons-logging-api-1.0.4.jar;D:\workspace\MyHadoopApp\lib\hbase\commons-math-1.1.jar;D:\workspace\MyHadoopApp\lib\hbase\jasper-compiler-5.5.12.jar;D:\workspace\MyHadoopApp\lib\hbase\jasper-runtime-5.5.12.jar;D:\workspace\MyHadoopApp\lib\hbase\jetty-6.1.14.jar;D:\workspace\MyHadoopApp\lib\hbase\jetty-util-6.1.14.jar;D:\workspace\MyHadoopApp\lib\hbas
e\jruby-complete-1.2.0.jar;D:\workspace\MyHadoopApp\lib\hbase\libthrift-r771587.jar;D:\workspace\MyHadoopApp\lib\hbase\lucene-core-2.2.0.jar;D:\workspace\MyHadoopApp\lib\hbase\servlet-api-2.5-6.1.14.jar;D:\workspace\MyHadoopApp\lib\hbase\xmlenc-0.52.jar;D:\workspace\MyHadoopApp\lib\hbase\zookeeper-3.3.2.jar
[hadoop] INFO [main] ZooKeeper.logEnv(97) | Client environment:java.library.path=D:\jdk1.6.0_05\bin;.;C:\WINDOWS\Sun\Java\bin;C:\WINDOWS\system32;C:\WINDOWS;d:/jdk1.6.0_05/bin/../jre/bin/client;d:/jdk1.6.0_05/bin/../jre/bin;d:/jdk1.6.0_05/bin/../jre/lib/i386;D:\cygwin\bin;D:\cygwin\usr\sbin;d:\oracle\product\10.2.0\db_1\bin;d:\jdk1.6.0_05\bin;D:\apache-ant-1.8.0RC1\bin;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\Program Files\Intel\WiFi\bin\;C:\Program Files\ThinkPad\ConnectUtilities;C:\Program Files\Common Files\Lenovo;d:\Program Files\cvsnt;C:\Program Files\Common Files\Thunder Network\KanKan\Codecs;C:\Program Files\Common Files\TTKN\Bin;C:\Program Files\StormII\Codec;C:\Program Files\StormII
[hadoop] INFO [main] ZooKeeper.logEnv(97) | Client environment:java.io.tmpdir=C:\DOCUME~1\ADMINI~1\LOCALS~1\Temp\
[hadoop] INFO [main] ZooKeeper.logEnv(97) | Client environment:java.compiler=<NA>
[hadoop] INFO [main] ZooKeeper.logEnv(97) | Client environment:os.name=Windows XP
[hadoop] INFO [main] ZooKeeper.logEnv(97) | Client environment:os.arch=x86
[hadoop] INFO [main] ZooKeeper.logEnv(97) | Client environment:os.version=5.1
[hadoop] INFO [main] ZooKeeper.logEnv(97) | Client environment:user.name=Administrator
[hadoop] INFO [main] ZooKeeper.logEnv(97) | Client environment:user.home=C:\Documents and Settings\Administrator
[hadoop] INFO [main] ZooKeeper.logEnv(97) | Client environment:user.dir=D:\workspace\MyHadoopApp
[hadoop] INFO [main] ZooKeeper.<init>(373) | Initiating client connection, connectString=127.0.0.1:2181 sessionTimeout=60000 watcher=org.apache.hadoop.hbase.client.HConnectionManager$ClientZKWatcher@cd2c3c
[hadoop] INFO [main-SendThread()] ClientCnxn.startConnect(1041) | Opening socket connection to server /127.0.0.1:2181
[hadoop] INFO [main-SendThread(localhost:2181)] ClientCnxn.primeConnection(949) | Socket connection established to localhost/127.0.0.1:2181, initiating session
[hadoop] INFO [main-SendThread(localhost:2181)] ClientCnxn.readConnectResult(738) | Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x12c6c8d8f6d0010, negotiated timeout = 40000
Output (only the "...9" values appear because getHTData() keys its map by column name, so each scanned row overwrites the previous one):
{col:cl_name=陈杰堆cl_9, col:name=陈杰堆9, col:sex=列9, col:cl_age=cl_9, col:cl_sex=列cl_9, col:age=9, col:=陈杰堆nocluster9}
[hadoop] INFO [HCM.shutdownHook] ZooKeeper.close(538) | Session: 0x12c6c8d8f6d0010 closed
[hadoop] INFO [main-EventThread] ClientCnxn.run(520) | EventThread shut down
