Setting Up a Kubernetes Environment, Part 02

In the previous part we set up the cluster; in this part we deploy some Kubernetes add-ons. The official add-on list is here:

https://kubernetes.io/docs/concepts/cluster-administration/addons/

This time we deploy two add-ons: Calico and Dashboard.

1. Since resources are limited, allow pods to be scheduled on the master as well

kubectl taint nodes --all node-role.kubernetes.io/master-
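To confirm the taint is actually gone, a quick check like this can be used (assuming the node is named master, as in Part 01):

kubectl describe node master | grep -i taint
#Taints: <none> means pods can now be scheduled on the master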

2. Deploy Calico
2.1. Deploy

kubectl apply -f https://docs.projectcalico.org/v3.10/manifests/calico.yaml

2.2. Watch the rollout and wait until all pods are running

watch kubectl get pods --all-namespaces

3. Deploy Dashboard
3.1. Deploy

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml

3.2. Watch the rollout and wait until all pods are running

watch kubectl get pods --all-namespaces

3.3. Start the proxy

kubectl proxy

3.4. Open the login page in a browser

http://IP:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

There is a pitfall here, though: Dashboard requires HTTPS for login, while kubectl proxy serves plain HTTP, so the login only succeeds when the address is localhost. I wasted quite a bit of time on this.
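If the browser is not running on the master itself, one workaround is an SSH tunnel, so the proxy is reached as localhost. A minimal sketch (the user name and MASTER_IP are placeholders):

#forward local port 8001 to the kubectl proxy on the master
ssh -L 8001:127.0.0.1:8001 <user>@MASTER_IP
#then browse http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/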

3.5. Create a service account

vi neohope-account.yaml

#file content
apiVersion: v1
kind: ServiceAccount
metadata:
  name: neohope
  namespace: kube-system

kubectl create -f neohope-account.yaml

3.6. Bind a role to the account

vi neohope-role.yaml

#file content
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: neohope
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: neohope
  namespace: kube-system

kubectl create -f neohope-role.yaml

3.7. Get the token

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep neohope | awk '{print $1}')
Name:         neohope-token-2khbb
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: neohope
kubernetes.io/service-account.uid: fc842f0e-0ef4-4c41-9f30-8a5409c866c2

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6ImtIRjFiZnI5V3NiRlpZQXpzUk9DanA4cHBCQnFOcFNrek5xTjltUGRLeDgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJuZW9ob3BlLXRva2VuLTJraGJiIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im5lb2hvcGUiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJmYzg0MmYwZS0wZWY0LTRjNDEtOWYzMC04YTU0MDljODY2YzIiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06bmVvaG9wZSJ9.Zsk4C3hs58JmLa0rRTdfoQKlY524kMtnlzEHxgMryv7u9kPHS51BA0xiVC1nMLDcbMp1U3YHlnz0-IJkFzVeaboq0qEFea56nnqASMSEtCB1c7IE52zip-4tDWdZ-jYwf7KN5Gwq_4ZUqa4gRf1znVH7nlsxTpaoQ_-yjJsQpqDyUb1BLgGrUGcWOF2hGMHrNPHbZfLyhsPp_ijOvmRviAq57nyrGYiVG9ZiMoGV_1Y5rvn2-L0BHCdgZjSzK6nlfvoMlpnqhQXkrxE0d9EJbeukfx5sOF3xUPkQx-6dKm3QrkqBNXInbDxJXJbj27JalGarpRDA9tsPg1mUqAb-7g
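If you would rather not copy the token out of the describe output by hand, a one-liner along these lines should also work (a sketch; it assumes the service account's first secret is the token secret, which is the case on this Kubernetes version):

kubectl -n kube-system get secret $(kubectl -n kube-system get sa neohope -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 --decode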

3.8. If you are accessing via localhost, the token above is all you need to log in

http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

3.9. If you are not accessing via localhost, there are three options

#A. Expose a port (NodePort)
#B. Proxy through the API server, e.g.
#https://IP:6443/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
#C. Put a reverse proxy such as nginx in front of it
#To keep it simple, option A is used here

3.10. Expose a NodePort

kubectl -n kubernetes-dashboard edit service kubernetes-dashboard
Change the type from ClusterIP to NodePort, then save
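If you prefer a non-interactive change over editing the service, a patch along these lines should achieve the same result (a sketch, not part of the original steps):

kubectl -n kubernetes-dashboard patch service kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'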

3.11. Check the service

kubectl get service -n kubernetes-dashboard -o wide
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE   SELECTOR
dashboard-metrics-scraper   ClusterIP   10.102.175.21    <none>        8000/TCP        17h   k8s-app=dashboard-metrics-scraper
kubernetes-dashboard        NodePort    10.102.129.248   <none>        443:31766/TCP   17h   k8s-app=kubernetes-dashboard
#here you can find the node port, 31766

kubectl get pod -n kubernetes-dashboard -o wide
NAME                                         READY   STATUS    RESTARTS   AGE   IP                NODE              NOMINATED NODE   READINESS GATES
dashboard-metrics-scraper-566cddb686-vkxvx   1/1     Running   0          17h   192.168.201.133   master   <none>           <none>
kubernetes-dashboard-7b5bf5d559-m6xt7        1/1     Running   0          17h   192.168.201.132   master   <none>           <none>
#here you can find the node each pod runs on

3.12. The service can now be reached directly on the master node

https://MASTER_IP:31766

Ignore all of the HTTPS certificate warnings,
then log in with the token.
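Before logging in, you can optionally confirm that the NodePort answers at all; the -k flag is needed because of the self-signed certificate (a quick sketch):

curl -k https://MASTER_IP:31766/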

Setting Up a Kubernetes Environment, Part 01

1. Hardware and network environment
Cloud hosts are used for this K8S setup.
The minimum configuration is 2 cores and 4 GB of RAM per host.

Node name  Internal IP
master 172.16.172.101
node01 172.16.172.102
node02 172.16.172.103

2. Configure the Kubernetes apt repository, on all three nodes

sudo curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

sudo vi /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main

3. Install the required packages, on all three nodes

sudo apt-get update
sudo apt-get install -y docker.io kubelet kubeadm kubectl
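Optionally, it can help to pin these packages so an unattended upgrade does not move the cluster to a new version behind your back (a common recommendation, not part of the original steps):

sudo apt-mark hold kubelet kubeadm kubectl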

4. Initialize the master node
4.1. The images can be pre-pulled first

kubeadm config images pull

4.2. Initialize with kubeadm

sudo kubeadm init --pod-network-cidr=192.168.0.0/16
[init] Using Kubernetes version: v1.16.2
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 17.12.1-ce. Latest validated version: 18.09
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [ubuntu18 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.3.15]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ubuntu18 localhost] and IPs [10.0.3.15 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ubuntu18 localhost] and IPs [10.0.3.15 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 29.525105 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node ubuntu18 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node ubuntu18 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 1zdpa5.vmcsacag4wj3a0gv
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.16.172.101:6443 --token 1zdpa5.vmcsacag4wj3a0gv \
--discovery-token-ca-cert-hash sha256:7944eedc04dcc943aa795dc515c4e8cd2f9d78127e1cf88c1931a5778296bb97

4.3. Configure kubectl for the current user on the master

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
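If the join token printed by kubeadm init has expired by the time the workers join (tokens are valid for 24 hours by default), a fresh join command can be generated on the master:

sudo kubeadm token create --print-join-command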

5. Join the two worker nodes

sudo kubeadm join 172.16.172.101:6443 --token 1zdpa5.vmcsacag4wj3a0gv \
    --discovery-token-ca-cert-hash sha256:7944eedc04dcc943aa795dc515c4e8cd2f9d78127e1cf88c1931a5778296bb97

[preflight] Running pre-flight checks
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

6. Check the node status on the master

kubectl get nodes
NAME     STATUS     ROLES    AGE   VERSION
master   NotReady   master   15m   v1.16.2
node01   NotReady   <none>   10m   v1.16.2
node02   NotReady   <none>   10m   v1.16.2
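#NotReady is expected at this point: the pod network add-on (Calico, deployed in Part 02) has not been installed yet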

Disabling Bluetooth Device Wake-up on Windows

Letting Bluetooth devices wake the machine is sometimes useful and sometimes a nuisance.
Once I put a hibernating laptop into my bag and the Bluetooth mouse woke Windows back up; the machine got very hot, and fortunately the only damage was a drained battery.
So I decided to disable Bluetooth wake-up:

# list the available sleep states
powercfg -a
# list devices currently armed to wake the system
powercfg /devicequery wake_armed
# list devices that can be configured to wake the system
powercfg /devicequery wake_programmable
# disable wake-up for a device
powercfg /devicedisablewake "HID-compliant mouse (003)"
# re-enable wake-up for a device
powercfg /deviceenablewake "HID-compliant mouse (003)"
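If you are unsure which device actually woke the machine, this query may help (not part of the original note):

powercfg /lastwake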

Custom Protocol Support for Chrome on Windows

On Windows, a custom protocol can be used to let Chrome launch an EXE, i.e. invoke a desktop client program.

1. Register the custom protocol for a single user via the registry

Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\SOFTWARE\Classes\MyProtocol]
"URL Protocol"=""
@="MyProtocol"

[HKEY_CURRENT_USER\SOFTWARE\Classes\MyProtocol\DefaultIcon]
@="FULL_PATH_TO_MYAPP\\MYAPP.exe, 1"

[HKEY_CURRENT_USER\SOFTWARE\Classes\MyProtocol\Shell]

[HKEY_CURRENT_USER\SOFTWARE\Classes\MyProtocol\Shell\Open]

[HKEY_CURRENT_USER\SOFTWARE\Classes\MyProtocol\Shell\Open\command]
@="\"FULL_PATH_TO_MYAPP\\MYAPP.exe\" \"%1\""

2. Register the custom protocol for all users via the registry (requires administrator rights)

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\Software\Classes\MyProtocol]
"URL Protocol"=""
@="MyProtocol"

[HKEY_LOCAL_MACHINE\Software\Classes\MyProtocol\DefaultIcon]
@="FULL_PATH_TO_MYAPP\\MYAPP.exe, 1"

[HKEY_LOCAL_MACHINE\Software\Classes\MyProtocol\Shell]

[HKEY_LOCAL_MACHINE\Software\Classes\MyProtocol\Shell\Open]

[HKEY_LOCAL_MACHINE\Software\Classes\MyProtocol\Shell\Open\command]
@="\"FULL_PATH_TO_MYAPP\\MYAPP.exe\" \"%1\""
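After importing the .reg file, the handler can be tested without Chrome from a command prompt; a quick sketch (the payload string is arbitrary and is simply passed to the EXE as %1):

start "" "MyProtocol:hello-from-cmd"

In Chrome, navigating to the same MyProtocol: URL triggers the handler after the user confirms the external-application prompt.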

Setting Up CDH6 on Ubuntu 18, Part 03

1. Make sure cdh01 can reach cdh02 and cdh03 over SSH

#this is the same userid that was granted passwordless sudo
ssh -l userid cdh02
ssh -l userid cdh03
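If passwordless SSH is not set up yet, copying the public key over is usually enough; a sketch (assuming an SSH key pair already exists for userid):

ssh-copy-id userid@cdh02
ssh-copy-id userid@cdh03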

2. Open the web console in a browser (everything from here on is done in the UI)
http://172.16.172.101:7180
Username: admin
Password: admin

3. Follow the setup wizard to create a new cluster
Have cloudera-manager-agent installed on all of 172.16.172.101-172.16.172.103

4. Follow the wizard and select the services you need
When installing, assign roles sensibly, which in practice means distributing memory sensibly

5. Install, in this order:
hdfs
zookeeper
hbase
yarn
hive
spark

6. Installation complete

PS:
1. If the installer reports that the JDBC driver cannot be found:

sudo apt-get install libmysql-java

Setting Up CDH6 on Ubuntu 18, Part 02

1. Installation on cdh01

#add the Cloudera repository
wget https://archive.cloudera.com/cm6/6.3.0/ubuntu1804/apt/archive.key
sudo apt-key add archive.key
wget https://archive.cloudera.com/cm6/6.3.0/ubuntu1804/apt/cloudera-manager.list
sudo mv cloudera-manager.list /etc/apt/sources.list.d/

#refresh the package lists
sudo apt-get update

#install JDK 8
sudo apt-get install openjdk-8-jdk

#install the Cloudera Manager packages
sudo apt-get install cloudera-manager-daemons cloudera-manager-agent cloudera-manager-server

2. Install and configure MySQL
2.1. Install MySQL

sudo apt-get install mysql-server mysql-client libmysqlclient-dev libmysql-java

2.2. Stop MySQL

sudo service mysql stop

2.3. Remove the old InnoDB log files that are no longer needed

sudo rm /var/lib/mysql/ib_logfile0
sudo rm /var/lib/mysql/ib_logfile1

2.4. Edit the configuration file

sudo vi /etc/mysql/mysql.conf.d/mysqld.cnf

#change or add the following settings
[mysqld]
transaction-isolation = READ-COMMITTED
max_allowed_packet = 32M
max_connections = 300
innodb_flush_method = O_DIRECT

2.5. Start MySQL

sudo service mysql start

2.6. Initialize MySQL (secure the installation)

sudo mysql_secure_installation

3. Create the databases and grant privileges

sudo mysql -uroot -p
-- create the databases
-- Cloudera Manager Server
CREATE DATABASE scm DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
-- Activity Monitor
CREATE DATABASE amon DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
-- Reports Manager
CREATE DATABASE rman DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
-- Hue
CREATE DATABASE hue DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
-- Hive Metastore Server
CREATE DATABASE hive DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
-- Sentry Server
CREATE DATABASE sentry DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
-- Cloudera Navigator Audit Server
CREATE DATABASE nav DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
-- Cloudera Navigator Metadata Server
CREATE DATABASE navms DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
-- Oozie
CREATE DATABASE oozie DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;

#create the users and grant privileges
GRANT ALL ON scm.* TO 'scm'@'%' IDENTIFIED BY 'scm123456';
GRANT ALL ON amon.* TO 'amon'@'%' IDENTIFIED BY 'amon123456';
GRANT ALL ON rman.* TO 'rman'@'%' IDENTIFIED BY 'rman123456';
GRANT ALL ON hue.* TO 'hue'@'%' IDENTIFIED BY 'hue123456';
GRANT ALL ON hive.* TO 'hive'@'%' IDENTIFIED BY 'hive123456';
GRANT ALL ON sentry.* TO 'sentry'@'%' IDENTIFIED BY 'sentry123456';
GRANT ALL ON nav.* TO 'nav'@'%' IDENTIFIED BY 'nav123456';
GRANT ALL ON navms.* TO 'navms'@'%' IDENTIFIED BY 'navms123456';
GRANT ALL ON oozie.* TO 'oozie'@'%' IDENTIFIED BY 'oozie123456';
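To double-check that the users and grants took effect, a quick query such as the following can be run (a sketch, not part of the original steps):

sudo mysql -uroot -p -e "SHOW DATABASES; SHOW GRANTS FOR 'scm'@'%';"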

4. Initialize the databases

sudo /opt/cloudera/cm/schema/scm_prepare_database.sh mysql scm scm scm123456
sudo /opt/cloudera/cm/schema/scm_prepare_database.sh mysql amon amon amon123456
sudo /opt/cloudera/cm/schema/scm_prepare_database.sh mysql rman rman rman123456
sudo /opt/cloudera/cm/schema/scm_prepare_database.sh mysql hue hue hue123456
sudo /opt/cloudera/cm/schema/scm_prepare_database.sh mysql hive hive hive123456
sudo /opt/cloudera/cm/schema/scm_prepare_database.sh mysql sentry sentry sentry123456
sudo /opt/cloudera/cm/schema/scm_prepare_database.sh mysql nav nav nav123456
sudo /opt/cloudera/cm/schema/scm_prepare_database.sh mysql navms navms navms123456
sudo /opt/cloudera/cm/schema/scm_prepare_database.sh mysql oozie oozie oozie123456

5. Start the server

#start cloudera-scm-server
sudo systemctl start cloudera-scm-server

#follow the startup log and wait until Jetty has finished starting
sudo tail -f /var/log/cloudera-scm-server/cloudera-scm-server.log

6. Log in
Open in a browser:
http://172.16.172.101:7180
Username: admin
Password: admin

Setting Up CDH6 on Ubuntu 18, Part 01

1. Environment preparation

VirtualBox 6
Ubuntu 18
Cloudera CDH 6.3

2. Install Ubuntu 18 in a virtual machine, configured with:
1 CPU
4 GB RAM
300 GB disk
two network adapters, one Host-Only and one NAT

3. Clone the virtual machine into three copies
If you copy the VM files by hand instead of using the clone function, remember to change the disk UUID, the VM UUID, and the NIC MAC address; see the sketch below.
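A minimal sketch of how this can be done with VirtualBox's command line (the disk path and VM name are placeholders):

#give the copied disk a new UUID
VBoxManage internalcommands sethduuid /path/to/copied-disk.vdi
#regenerate the MAC address of the first network adapter
VBoxManage modifyvm "cdh02" --macaddress1 auto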

4. Set the IP address, hostname, and hosts file on each node (a sketch follows the table below)

Hostname  Host-Only IP
cdh01 172.16.172.101
cdh02 172.16.172.102
cdh03 172.16.172.103
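A minimal sketch of what this looks like, using the addresses above (run the hostnamectl line with the matching name on each node):

sudo hostnamectl set-hostname cdh01

#append to /etc/hosts on every node
172.16.172.101 cdh01
172.16.172.102 cdh02
172.16.172.103 cdh03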

5. Allow passwordless sudo, at least on cdh02 and cdh03

#edit /etc/sudoers (preferably via sudo visudo)
userid ALL=(ALL:ALL) NOPASSWD: ALL

Why a JDK Dynamic Proxy Requires at Least One Interface

To answer this question, we need to look at the OpenJDK source:

//inside the Proxy class,
//constructorParams is defined as follows:
private static final Class<?>[] constructorParams = { InvocationHandler.class };

//newProxyInstance, stripped down to its essence (exception handling omitted), is:
public static Object newProxyInstance(ClassLoader loader, Class<?>[] interfaces, InvocationHandler h)
        throws IllegalArgumentException {
    //ProxyClassFactory uses ProxyGenerator to generate the proxy class
    Class<?> cl = getProxyClass0(loader, interfaces);
    //look up the generated constructor that takes an InvocationHandler
    final Constructor<?> cons = cl.getConstructor(constructorParams);
    //instantiate the proxy class
    return cons.newInstance(new Object[]{h});
}

//inside the ProxyGenerator class:
public static byte[] generateProxyClass(final String name, Class<?>[] interfaces, int accessFlags) {}
private byte[] generateClassFile() {}

//what these two methods do is:
//generate a proxy implementation of every interface passed in,
//plus the other members that are needed, such as the constructor used above, toString, equals, hashCode, etc.,
//and emit the corresponding bytecode
//which is exactly why a JDK dynamic proxy requires at least one interface: the generated class can only expose methods declared on those interfaces

Jetty Source Code Notes, Part 01

1. The last step of Jetty's ScopedHandler.doStart sets the thread-local variable __outerScope back to null. Why is this necessary?

protected void doStart() throws Exception
{
    try{
        _outerScope=__outerScope.get();
        if (_outerScope==null){
           //this is the first scoped handler seen on this thread during this start pass
           //tell the scoped handlers started after it to pick this one as their _outerScope
            __outerScope.set(this);
        }
        super.doStart();
        _nextScope= getChildHandlerByClass(ScopedHandler.class);
    }
    finally
    {
        if (_outerScope==null){
           //this start pass is over;
           //so that the next pass on the same thread
           //can again identify its first scoped handler correctly,
           //the thread-local must be reset to null
            __outerScope.set(null);
        }
    }
}

2. In Jetty, what is the calling order inside ScopedHandler's nextHandle?

//There is also a non-scoped handler X in this chapter, and at first I could not follow the call order.
//It turns out to work like this:
public final void nextHandle(String target...)...
{
    if (_nextScope!=null && _nextScope==_handler){
        //this condition guarantees the next handler is a scoped handler, so jump straight to its doHandle
        _nextScope.doHandle(target,baseRequest,request, response);
    }
    else if (_handler!=null){
        //otherwise the next handler is a non-scoped handler, so let HandlerWrapper.handle invoke it
        super.handle(target,baseRequest,request,response);
    }
}

The member naming feels a bit unfortunate:
for example __outerScope versus _outerScope,
and _handler in fact always points to the next handler, so wouldn't _nextHandler be a better name?