Setting Up a Kubernetes Environment, Part 01

1. Hardware and network environment
This setup uses cloud hosts to build the Kubernetes cluster.
The minimum configuration per host is 2 cores and 4 GB of RAM.

Node name  Internal IP
master     172.16.172.101
node01     172.16.172.102
node02     172.16.172.103

2. Configure the Kubernetes apt repository (all three nodes)

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

sudo vi /etc/apt/sources.list.d/kubernetes.list
# add this line:
deb http://apt.kubernetes.io/ kubernetes-xenial main
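
Alternatively, the same repo line can be written without opening an editor (a standard non-interactive variant, not in the original steps):

echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list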

3. Install the required packages (all three nodes)

sudo apt-get update
sudo apt-get install -y docker.io kubelet kubeadm kubectl
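
To keep apt from upgrading these components underneath a running cluster later, common practice (an addition, not part of the original steps) is to pin them:

sudo apt-mark hold kubelet kubeadm kubectl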

4. Initialize the master node
4.1. Optionally pre-pull the images

sudo kubeadm config images pull

4.2. Run kubeadm init

sudo kubeadm init --pod-network-cidr=192.168.0.0/16
[init] Using Kubernetes version: v1.16.2
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 17.12.1-ce. Latest validated version: 18.09
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [ubuntu18 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.3.15]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ubuntu18 localhost] and IPs [10.0.3.15 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ubuntu18 localhost] and IPs [10.0.3.15 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 29.525105 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node ubuntu18 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node ubuntu18 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 1zdpa5.vmcsacag4wj3a0gv
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.16.172.101:6443 --token 1zdpa5.vmcsacag4wj3a0gv \
--discovery-token-ca-cert-hash sha256:7944eedc04dcc943aa795dc515c4e8cd2f9d78127e1cf88c1931a5778296bb97

4.3. Configure kubectl for the regular user

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

5. Join the two worker nodes

sudo kubeadm join 172.16.172.101:6443 --token 1zdpa5.vmcsacag4wj3a0gv \
    --discovery-token-ca-cert-hash sha256:7944eedc04dcc943aa795dc515c4e8cd2f9d78127e1cf88c1931a5778296bb97

[preflight] Running pre-flight checks
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

6. Check node status on the master

kubectl get nodes
NAME     STATUS     ROLES    AGE   VERSION
master   NotReady   master   15m   v1.16.2
node01   NotReady   <none>   10m   v1.16.2
node02   NotReady   <none>   10m   v1.16.2
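
All three nodes report NotReady because no pod network add-on has been deployed yet (the init output above says as much). Since the cluster was initialized with --pod-network-cidr=192.168.0.0/16, which is Calico's default CIDR, Calico is a natural choice; a sketch, using Calico's generic manifest URL (in practice, pin a Calico version matching your Kubernetes release):

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

Once the Calico pods are running, kubectl get nodes should show all nodes as Ready.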

Disabling Automatic Wake by Bluetooth Devices on Windows

Letting Bluetooth devices wake the machine is sometimes handy and sometimes a liability.
Once I suspended my laptop and put it in a bag, only for the Bluetooth mouse to wake Windows back up; the machine got extremely hot, and luckily it merely ran the battery flat.
So I set out to disable Bluetooth wake-up:

# List the sleep states the machine supports
powercfg -a
# List devices that are armed (or could be armed) to wake the system
powercfg /devicequery wake_armed
powercfg /devicequery wake_programmable
# Disable wake-up for a device
powercfg /devicedisablewake "HID-compliant mouse (003)"
# Re-enable wake-up for a device
powercfg /deviceenablewake "HID-compliant mouse (003)"

Custom Protocol Support in Chrome on Windows

On Windows, Chrome can also launch an EXE through a custom protocol, i.e., hand off a URL to a native client program.

1. Per-user custom protocol support, via the registry

Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\SOFTWARE\Classes\MyProtocol]
"URL Protocol"=""
@="MyProtocol"

[HKEY_CURRENT_USER\SOFTWARE\Classes\MyProtocol\DefaultIcon]
@="FULL_PATH_TO_MYAPP\\MYAPP.exe.exe, 1"

[HKEY_CURRENT_USER\SOFTWARE\Classes\MyProtocol\Shell]

[HKEY_CURRENT_USER\SOFTWARE\Classes\MyProtocol\Shell\Open]

[HKEY_CURRENT_USER\SOFTWARE\Classes\MyProtocol\Shell\Open\command]
@="\"FULL_PATH_TO_MYAPP\\MYAPP.exe\" \"%1\""

2. Custom protocol support for all users, via the registry (requires administrator rights)

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\Software\Classes\MyProtocol]
"URL Protocol"=""
@="MyProtocol"

[HKEY_LOCAL_MACHINE\Software\Classes\MyProtocol\DefaultIcon]
@="FULL_PATH_TO_MYAPP\\MYAPP.exe,1"

[HKEY_LOCAL_MACHINE\Software\Classes\MyProtocol\Shell]

[HKEY_LOCAL_MACHINE\Software\Classes\MyProtocol\Shell\Open]

[HKEY_LOCAL_MACHINE\Software\Classes\MyProtocol\Shell\Open\command]
@="\"FULL_PATH_TO_MYAPP\\MYAPP.exe\" \"%1\""

Setting Up CDH6 on Ubuntu 18, Part 03

1. Make sure cdh01 can reach cdh02 and cdh03 over SSH

# Use the same userid that already has passwordless sudo
ssh -l userid cdh02
ssh -l userid cdh03
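
If passwordless SSH is not set up yet, one way to do it (a sketch using standard OpenSSH tooling, run as userid on cdh01):

# Generate a key pair, accepting the defaults
ssh-keygen -t rsa
# Push the public key to the other two nodes
ssh-copy-id userid@cdh02
ssh-copy-id userid@cdh03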

2. Open the web UI in a browser (everything from here on is done in the GUI)
http://172.16.172.101:7180
Username: admin
Password: admin

3. Follow the setup wizard to create a new cluster
Install cloudera-manager-agent on all of 172.16.172.101-172.16.172.103.

4. Follow the wizard to select the services you need
When assigning roles, distribute them sensibly across hosts, which mostly means budgeting memory sensibly.

5. Install the services in this order:
hdfs
zookeeper
hbase
yarn
hive
spark

6. Installation complete

PS:
1. If a "JDBC driver not found" error appears:

sudo apt-get install libmysql-java

Setting Up CDH6 on Ubuntu 18, Part 02

1. Install on cdh01

# Add the Cloudera repository
wget https://archive.cloudera.com/cm6/6.3.0/ubuntu1804/apt/archive.key
sudo apt-key add archive.key
wget https://archive.cloudera.com/cm6/6.3.0/ubuntu1804/apt/cloudera-manager.list
sudo mv cloudera-manager.list /etc/apt/sources.list.d/

# Refresh the package index
sudo apt-get update

# Install OpenJDK 8
sudo apt-get install openjdk-8-jdk

# Install Cloudera Manager
sudo apt-get install cloudera-manager-daemons cloudera-manager-agent cloudera-manager-server

2. Install and configure MySQL
2.1. Install MySQL

sudo apt-get install mysql-server mysql-client libmysqlclient-dev libmysql-java

2.2. Stop MySQL

sudo service mysql stop

2.3. Remove the pre-existing InnoDB log files

sudo rm /var/lib/mysql/ib_logfile0
sudo rm /var/lib/mysql/ib_logfile1

2.4. Edit the configuration file

sudo vi /etc/mysql/mysql.conf.d/mysqld.cnf

# Modify or add the following settings
[mysqld]
transaction-isolation = READ-COMMITTED
max_allowed_packet = 32M
max_connections = 300
innodb_flush_method = O_DIRECT

2.5. Start MySQL

sudo service mysql start

2.6. Run mysql_secure_installation

sudo mysql_secure_installation

3. Create the databases and grant privileges

sudo mysql -uroot -p
-- Create the databases
-- Cloudera Manager Server
CREATE DATABASE scm DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
-- Activity Monitor
CREATE DATABASE amon DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
-- Reports Manager
CREATE DATABASE rman DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
-- Hue
CREATE DATABASE hue DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
-- Hive Metastore Server
CREATE DATABASE hive DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
-- Sentry Server
CREATE DATABASE sentry DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
-- Cloudera Navigator Audit Server
CREATE DATABASE nav DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
-- Cloudera Navigator Metadata Server
CREATE DATABASE navms DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
-- Oozie
CREATE DATABASE oozie DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;

-- Create the users and grant privileges
GRANT ALL ON scm.* TO 'scm'@'%' IDENTIFIED BY 'scm123456';
GRANT ALL ON amon.* TO 'amon'@'%' IDENTIFIED BY 'amon123456';
GRANT ALL ON rman.* TO 'rman'@'%' IDENTIFIED BY 'rman123456';
GRANT ALL ON hue.* TO 'hue'@'%' IDENTIFIED BY 'hue123456';
GRANT ALL ON hive.* TO 'hive'@'%' IDENTIFIED BY 'hive123456';
GRANT ALL ON sentry.* TO 'sentry'@'%' IDENTIFIED BY 'sentry123456';
GRANT ALL ON nav.* TO 'nav'@'%' IDENTIFIED BY 'nav123456';
GRANT ALL ON navms.* TO 'navms'@'%' IDENTIFIED BY 'navms123456';
GRANT ALL ON oozie.* TO 'oozie'@'%' IDENTIFIED BY 'oozie123456';

4. Initialize the databases

sudo /opt/cloudera/cm/schema/scm_prepare_database.sh mysql scm scm scm123456
sudo /opt/cloudera/cm/schema/scm_prepare_database.sh mysql amon amon amon123456
sudo /opt/cloudera/cm/schema/scm_prepare_database.sh mysql rman rman rman123456
sudo /opt/cloudera/cm/schema/scm_prepare_database.sh mysql hue hue hue123456
sudo /opt/cloudera/cm/schema/scm_prepare_database.sh mysql hive hive hive123456
sudo /opt/cloudera/cm/schema/scm_prepare_database.sh mysql sentry sentry sentry123456
sudo /opt/cloudera/cm/schema/scm_prepare_database.sh mysql nav nav nav123456
sudo /opt/cloudera/cm/schema/scm_prepare_database.sh mysql navms navms navms123456
sudo /opt/cloudera/cm/schema/scm_prepare_database.sh mysql oozie oozie oozie123456
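
For reference, the positional arguments used above follow the script's usage form, scm_prepare_database.sh <databaseType> <databaseName> <databaseUser> <password>; each run should finish with a success message along the lines of "All done, your SCM database is configured correctly!" before you move on.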

5. Start the server

# Start cloudera-scm-server
sudo systemctl start cloudera-scm-server

# Watch the startup log until Jetty has finished starting
sudo tail -f /var/log/cloudera-scm-server/cloudera-scm-server.log

6. Log in
Open in a browser:
http://172.16.172.101:7180
Username: admin
Password: admin

Setting Up CDH6 on Ubuntu 18, Part 01

1. Environment

VirtualBox 6
Ubuntu 18
Cloudera CDH 6.3

2. Install Ubuntu 18 in a virtual machine, configured as:
1 CPU
4 GB RAM
300 GB disk
Two network adapters: one Host-Only, one NAT

3. Clone the virtual machine into three copies
If you copy it by hand instead, remember to change the disk UUID, the VM UUID, and the NIC hardware (MAC) address
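
Cloning through VBoxManage avoids the manual fix-ups, since it generates fresh UUIDs and MAC addresses by default; a sketch, assuming the source VM is named cdh01:

VBoxManage clonevm cdh01 --name cdh02 --register
VBoxManage clonevm cdh01 --name cdh03 --register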

4. Set the IP address, hostname, and hosts file

Hostname  Host-Only IP
cdh01     172.16.172.101
cdh02     172.16.172.102
cdh03     172.16.172.103
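
The corresponding /etc/hosts entries, to be appended on all three machines (derived directly from the table above):

172.16.172.101 cdh01
172.16.172.102 cdh02
172.16.172.103 cdh03

The hostname itself can be set on each machine with, e.g., sudo hostnamectl set-hostname cdh01.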

5. Allow passwordless sudo, at least on cdh02 and cdh03

# Edit /etc/sudoers (preferably via: sudo visudo) and add:
userid ALL=(ALL:ALL) NOPASSWD: ALL

Why a JDK Dynamic Proxy Requires at Least One Interface

To answer this, look at the OpenJDK source:

//Inside the Proxy class,
//constructorParams is defined as:
private static final Class<?>[] constructorParams = { InvocationHandler.class };

//newProxyInstance, stripped down to its essentials, is:
public static Object newProxyInstance(ClassLoader loader, Class<?>[] interfaces, InvocationHandler h)
        throws IllegalArgumentException {
    //ProxyClassFactory has ProxyGenerator generate the proxy class
    Class<?> cl = getProxyClass0(loader, interfaces);
    //Find the constructor that takes an InvocationHandler
    final Constructor<?> cons = cl.getConstructor(constructorParams);
    //Instantiate the proxy class
    return cons.newInstance(new Object[]{h});
}

//In the ProxyGenerator class:
public static byte[] generateProxyClass(final String name, Class<?>[] interfaces, int accessFlags) {}
private byte[] generateClassFile() {}

//Together, these two methods:
//implement every method of every given interface,
//generate the other required members (the constructor used above, toString, equals, hashCode, ...),
//and emit the corresponding bytecode.
//The generated class extends java.lang.Proxy, and Java has no multiple inheritance,
//so its superclass slot is already taken: it can only *implement* interfaces.
//That is why a JDK dynamic proxy requires at least one interface.
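
To see the mechanism end to end, here is a minimal, self-contained usage example (the interface and handler names are illustrative, not from the JDK source):

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class ProxyDemo {
    // The one interface the generated proxy class will implement
    interface Greeter {
        String greet(String name);
    }

    public static void main(String[] args) {
        // Every call on the proxy is routed to this handler's invoke()
        InvocationHandler h = (proxy, method, methodArgs) -> {
            System.out.println("invoking " + method.getName());
            return "hello, " + methodArgs[0];
        };
        Greeter g = (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[]{Greeter.class},
                h);
        System.out.println(g.greet("world")); // prints: invoking greet / hello, world
    }
}

The class generated at runtime is something like com.sun.proxy.$Proxy0 extends Proxy implements Greeter, which is exactly why the interface is mandatory.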

Jetty Source Notes 01

1. The last step of ScopedHandler's doStart method in Jetty resets the thread-local __outerScope to null. Why is that necessary?

protected void doStart() throws Exception
{
    try{
        _outerScope=__outerScope.get();
        if (_outerScope==null){
           //This is the first scoped handler started in this pass:
           //record it so the nested scoped handlers pick it as their _outerScope
            __outerScope.set(this);
        }
        super.doStart();
        _nextScope= getChildHandlerByClass(ScopedHandler.class);
    }
    finally
    {
        if (_outerScope==null){
           //This start pass is over. The thread-local must be reset to null
           //so that the next start on the same thread can again correctly
           //identify its first scoped handler.
            __outerScope.set(null);
        }
    }
}

2. In Jetty's ScopedHandler, what is the call order inside nextHandle?

//Also, the example here includes a non-scoped handler X, and at first the call
//order was hard to follow. It turns out to work like this:
public final void nextHandle(String target...)...
{
    if (_nextScope!=null && _nextScope==_handler){
        //This condition guarantees the next handler is itself a scoped handler,
        //so jump straight to its doHandle
        _nextScope.doHandle(target,baseRequest,request, response);
    }
    else if (_handler!=null){
        //The next handler is non-scoped: fall back to the ordinary
        //HandlerWrapper.handle chain
        super.handle(target,baseRequest,request,response);
    }
}

The member naming here feels unfortunate:
__outerScope versus _outerScope, for instance;
and _handler in fact always points to the next handler, so wouldn't _nextHandler be a better name?
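
For context, a minimal sketch of where such a scope chain shows up in practice, assuming the Jetty 9 servlet API: a ServletContextHandler with sessions enabled nests a SessionHandler and a ServletHandler, all three of which extend ScopedHandler, so starting the context exercises exactly the doStart logic above.

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.ServletContextHandler;

public class ScopeChainDemo {
    public static void main(String[] args) throws Exception {
        Server server = new Server(8080);
        // SESSIONS makes the context create a nested SessionHandler, giving a
        // chain of three scoped handlers: context -> session -> servlet handler
        ServletContextHandler context =
                new ServletContextHandler(ServletContextHandler.SESSIONS);
        context.setContextPath("/");
        server.setHandler(context);
        // Starting the server runs doStart on the whole chain; the context is
        // the first scoped handler, so it sets and later clears __outerScope
        server.start();
        server.join();
    }
}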

Removing Apps from a Xiaomi TV via ADB

My Xiaomi TV kept getting slower, so I wanted to remove some of its apps.
Note: once a pre-installed app has been removed, there is no good way to restore it other than a system upgrade, so use these commands with care.

# First, enable remote debugging on the TV so ADB can connect over the network
Enable developer mode: Settings -> About -> Product model -> press OK on the remote 5 times in a row
Allow ADB debugging: Settings -> Accounts & security -> Allow installation of apps from unknown sources -> Allow ADB debugging
Find the TV's IP: Settings -> Network -> note the LAN IP the TV is using

# Connect to the TV remotely
adb connect IP

# Show connected devices
adb devices

# Install an APK
adb install -r PATH_TO_APK/MyAPK.apk

# List installed packages
adb shell pm list packages
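
To find a package name quickly, the listing can be filtered with grep on the host side, for example:

adb shell pm list packages | grep xiaomi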

# Remove individual apps (package names vary between firmware versions)
# App Store
adb shell pm uninstall com.xiaomi.mitv.appstore
# Mi Store
adb shell pm uninstall com.xiaomi.mitv.shop
# Mi Pay
adb shell pm uninstall com.xiaomi.mitv.payment
# Mi Wallet
adb shell pm uninstall com.mipay.wallet.tv
# Weather
adb shell pm uninstall com.xiaomi.tweather
# Fashion Gallery
adb shell pm uninstall com.xiaomi.tv.gallery
# Photo album
adb shell pm uninstall com.mitv.gallery
# Photo screensaver
adb shell pm uninstall com.android.dreams.phototable
# Game center
adb shell pm uninstall com.xiaomi.mibox.gamecenter
# Calendar
adb shell pm uninstall com.xiaomi.mitv.calendar
# Reminders
adb shell pm uninstall com.mitv.alarmcenter
# User manual
adb shell pm uninstall com.xiaomi.mitv.handbook
# Trending news
adb shell pm uninstall com.duokan.videodaily
# News
adb shell pm uninstall com.xiaomi.mitvnews
# Mi Home
adb shell pm uninstall com.xiaomi.smarthome.tv
# Mi Box settings
adb shell pm uninstall com.xiaomi.mitv.settings

# Reboot the device
adb shell reboot
# Or power it off instead (-p)
adb shell reboot -p