
Before you do anything below, ensure:

1. You have followed all the steps in the previous blog post (installing kops).

2. You have the prerequisites in place: awscli installed, a valid IAM user, an S3 bucket for the kops state store, and a Route 53 hosted zone for your domain.
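
A quick way to sanity-check those prerequisites before you start (both are standard awscli calls; the first confirms your IAM credentials work, the second that the state bucket exists):

$aws sts get-caller-identity
$aws s3 ls s3://kops-state.wlin.space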

 

1. Download and install kubectl

Linux: 

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl

For more information: https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-binary-via-curl

After the download finishes, run the following commands:

[vagrant@localhost ~]$ sudo mv kubectl /usr/local/bin/
[vagrant@localhost ~]$ chmod +x /usr/local/bin/kubectl

Run kubectl with no arguments to see its usage output:

[vagrant@localhost ~]$ kubectl
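
To confirm the install worked, kubectl can also report its own version (check the client only; the server part will fail until a cluster exists):

$kubectl version --client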

2. kops needs an SSH key to manage the Kubernetes cluster nodes, so create one:

[vagrant@localhost ~]$ ssh-keygen -t rsa -b 4096
then "return/enter" 3 times til end.

3. Use kops to create the cluster

[vagrant@localhost ~]$ kops create cluster --name=kubernetes.wlin.space --state=s3://kops-state.wlin.space --zones=ap-northeast-1a --node-count=2 --node-size=t2.micro --master-size=t2.micro --dns-zone=kubernetes.wlin.space
I0309 06:50:36.364158   14189 create_cluster.go:439] Inferred --cloud=aws from zone "ap-northeast-1a"
I0309 06:50:36.364324   14189 create_cluster.go:971] Using SSH public key: /home/vagrant/.ssh/id_rsa.pub
I0309 06:50:37.538381   14189 subnets.go:184] Assigned CIDR 172.20.32.0/19 to subnet ap-northeast-1a
Previewing changes that will be made:

I0309 06:50:41.183697   14189 executor.go:91] Tasks: 0 done / 73 total; 31 can run
I0309 06:50:42.102088   14189 executor.go:91] Tasks: 31 done / 73 total; 24 can run
I0309 06:50:43.265453   14189 executor.go:91] Tasks: 55 done / 73 total; 16 can run
I0309 06:50:43.581644   14189 executor.go:91] Tasks: 71 done / 73 total; 2 can run
I0309 06:50:43.662277   14189 executor.go:91] Tasks: 73 done / 73 total; 0 can run
Will create resources:

...(long resource listing omitted here)...

 

Must specify --yes to apply changes

Cluster configuration has been created.

Suggestions:
 * list clusters with: kops get cluster
 * edit this cluster with: kops edit cluster kubernetes.wlin.space
 * edit your node instance group: kops edit ig --name=kubernetes.wlin.space nodes
 * edit your master instance group: kops edit ig --name=kubernetes.wlin.space master-ap-northeast-1a

Finally configure your cluster with: kops update cluster kubernetes.wlin.space --yes

3.1 If a typo caused an error, you can either fix the configuration and re-run kops update cluster, or delete the cluster and start over with kops delete cluster plus the --yes flag.

Example:  $kops delete cluster --name kubernetes.wlin.space --state s3://kops-state.wlin.space --yes
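
Tip: to avoid passing --state on every command, you can export it once per shell session; kops reads the KOPS_STATE_STORE environment variable:

$export KOPS_STATE_STORE=s3://kops-state.wlin.space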

3.2 If you want to review your configuration, type:

$kops edit cluster kubernetes.wlin.space
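
kops edit opens the cluster spec YAML in your $EDITOR. The same works for instance groups; for example, to resize the worker nodes you would edit the nodes instance group, whose spec looks roughly like this (abbreviated sketch based on the values used in this walkthrough):

$kops edit ig --name=kubernetes.wlin.space nodes --state=s3://kops-state.wlin.space

apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  name: nodes
spec:
  machineType: t2.micro
  maxSize: 2    # raise minSize/maxSize to add worker nodes
  minSize: 2
  role: Node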

4. After you confirm the settings, start the deployment:

[vagrant@localhost ~]$ kops update cluster kubernetes.wlin.space --yes --state=s3://kops-state.wlin.space
I0309 06:57:59.631157   14196 executor.go:91] Tasks: 0 done / 73 total; 31 can run
I0309 06:58:00.706318   14196 vfs_castore.go:435] Issuing new certificate: "ca"
I0309 06:58:00.833775   14196 vfs_castore.go:435] Issuing new certificate: "apiserver-aggregator-ca"
I0309 06:58:01.521000   14196 executor.go:91] Tasks: 31 done / 73 total; 24 can run
I0309 06:58:02.779146   14196 vfs_castore.go:435] Issuing new certificate: "apiserver-aggregator"
I0309 06:58:03.124874   14196 vfs_castore.go:435] Issuing new certificate: "kubelet"
I0309 06:58:03.851946   14196 vfs_castore.go:435] Issuing new certificate: "kops"
I0309 06:58:03.911082   14196 vfs_castore.go:435] Issuing new certificate: "kube-controller-manager"
I0309 06:58:04.248209   14196 vfs_castore.go:435] Issuing new certificate: "master"
I0309 06:58:04.286990   14196 vfs_castore.go:435] Issuing new certificate: "kube-proxy"
I0309 06:58:04.981420   14196 vfs_castore.go:435] Issuing new certificate: "kubecfg"
I0309 06:58:05.095600   14196 vfs_castore.go:435] Issuing new certificate: "kube-scheduler"
I0309 06:58:05.150853   14196 vfs_castore.go:435] Issuing new certificate: "kubelet-api"
I0309 06:58:05.362275   14196 vfs_castore.go:435] Issuing new certificate: "apiserver-proxy-client"
I0309 06:58:05.980156   14196 executor.go:91] Tasks: 55 done / 73 total; 16 can run
I0309 06:58:07.499027   14196 launchconfiguration.go:333] waiting for IAM instance profile "nodes.kubernetes.wlin.space" to be ready
I0309 06:58:07.505084   14196 launchconfiguration.go:333] waiting for IAM instance profile "masters.kubernetes.wlin.space" to be ready
I0309 06:58:18.144157   14196 executor.go:91] Tasks: 71 done / 73 total; 2 can run
I0309 06:58:18.762376   14196 executor.go:91] Tasks: 73 done / 73 total; 0 can run
I0309 06:58:18.762427   14196 dns.go:153] Pre-creating DNS records
I0309 06:58:20.702808   14196 update_cluster.go:248] Exporting kubecfg for cluster
kops has set your kubectl context to kubernetes.wlin.space

Cluster is starting.  It should be ready in a few minutes.

Suggestions:
 * validate cluster: kops validate cluster
 * list nodes: kubectl get nodes --show-labels
 * ssh to the master: ssh -i ~/.ssh/id_rsa admin@api.kubernetes.wlin.space
The admin user is specific to Debian. If not using Debian please use the appropriate user based on your OS.
 * read about installing addons: https://github.com/kubernetes/kops/blob/master/docs/addons.md

5. It takes 2-3 minutes for the cluster to come up. Then you can run the command below to check the status.

[vagrant@localhost ~]$ kubectl get node
NAME                                               STATUS    ROLES     AGE       VERSION
ip-172-20-36-146.ap-northeast-1.compute.internal   Ready     master    2m        v1.8.6
ip-172-20-36-153.ap-northeast-1.compute.internal   Ready     node      28s       v1.8.6
ip-172-20-36-186.ap-northeast-1.compute.internal   Ready     node      48s       v1.8.6
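
kops can also validate the cluster for you; this checks that the expected masters and nodes have registered and are Ready:

$kops validate cluster --state=s3://kops-state.wlin.space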

6. Try running something on the cluster.

[vagrant@localhost ~]$ kubectl run hello-minikube --image=k8s.gcr.io/echoserver:1.4 --port=8080
deployment "hello-minikube" created
[vagrant@localhost ~]$ kubectl expose deployment hello-minikube --type=NodePort
service "hello-minikube" exposed
[vagrant@localhost ~]$ kubectl get service
NAME             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
hello-minikube   NodePort    100.68.204.147   <none>        8080:31004/TCP   26s
kubernetes       ClusterIP   100.64.0.1       <none>        443/TCP          11m
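
In the PORT(S) column, 8080:31004/TCP means the service listens on port 8080 inside the cluster and Kubernetes exposed it on NodePort 31004 on every node. If you want to script against the assigned port instead of reading it off the table, jsonpath works:

$kubectl get service hello-minikube -o jsonpath='{.spec.ports[0].nodePort}'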

7. Port 31004 is the NodePort Kubernetes assigned to hello-minikube (see the PORT(S) column above). I have to modify the AWS VPC security group of the master node to allow inbound traffic on that port.
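
You can do this from the EC2 console, or with awscli; a sketch of the equivalent command, where sg-xxxxxxxx is a placeholder for the master's actual security group ID (note that --cidr 0.0.0.0/0 opens the port to everyone, which is fine for this test but not for production):

$aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 31004 --cidr 0.0.0.0/0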

8. Test the connection with curl or Chrome.

curl http://api.kubernetes.wlin.space:31004/test
CLIENT VALUES:
client_address=172.20.36.146
command=GET
real path=/test
query=nil
request_version=1.1
request_uri=http://api.kubernetes.wlin.space:8080/test

SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001

HEADERS RECEIVED:
accept=*/*
host=api.kubernetes.wlin.space:31004
user-agent=curl/7.54.0
BODY:
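
If you only want to remove the demo but keep the cluster, delete the service and deployment you created in step 6:

$kubectl delete service hello-minikube
$kubectl delete deployment hello-minikube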

 

9. If you don't want to keep the machines running, delete the cluster. The first command previews what will be deleted; adding --yes performs the deletion:

$kops delete cluster kubernetes.wlin.space --state=s3://kops-state.wlin.space

$kops delete cluster kubernetes.wlin.space --state=s3://kops-state.wlin.space --yes

......

.....

Deleted kubectl config for kubernetes.wlin.space

Deleted cluster: "kubernetes.wlin.space"

Check the EC2 instance status in the AWS Management Console to confirm the machines were terminated.
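
If you prefer the CLI, kops tags its EC2 instances with a KubernetesCluster tag, so a filter like the sketch below should show the instances moving to terminated (adjust the region if your default differs from ap-northeast-1):

$aws ec2 describe-instances --region ap-northeast-1 --filters "Name=tag:KubernetesCluster,Values=kubernetes.wlin.space" --query "Reservations[].Instances[].State.Name"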

 

 
