This commit is contained in:
Torma Kristóf 2019-10-02 18:27:25 +02:00
parent d6dd233e83
commit 14cb4ae908
Signed by: tormakris
GPG Key ID: DC83C4F2C41B1047
45 changed files with 1333 additions and 0 deletions

76
CODE_OF_CONDUCT.md Normal file
@@ -0,0 +1,76 @@
# Contributor Covenant Code of Conduct
## Our Pledge
In the interest of fostering an open and welcoming environment, we as
contributors and maintainers pledge to making participation in our project and
our community a harassment-free experience for everyone, regardless of age, body
size, disability, ethnicity, sex characteristics, gender identity and expression,
level of experience, education, socio-economic status, nationality, personal
appearance, race, religion, or sexual identity and orientation.
## Our Standards
Examples of behavior that contributes to creating a positive environment
include:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery and unwelcome sexual attention or
advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic
address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable
behavior and are expected to take appropriate and fair corrective action in
response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.
## Scope
This Code of Conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community. Examples of
representing a project or community include using an official project e-mail
address, posting via an official social media account, or acting as an appointed
representative at an online or offline event. Representation of a project may be
further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting the project team. All
complaints will be reviewed and investigated and will result in a response that
is deemed necessary and appropriate to the circumstances. The project team is
obligated to maintain confidentiality with regard to the reporter of an incident.
Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good
faith may face temporary or permanent repercussions as determined by other
members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
[homepage]: https://www.contributor-covenant.org
For answers to common questions about this code of conduct, see
https://www.contributor-covenant.org/faq

21
CONTRIBUTING.md Normal file
@@ -0,0 +1,21 @@
# Contributing
When contributing to this repository, please first discuss the change you wish to make via issue,
email, or any other method with the owners of this repository before making a change.
Please note we have a code of conduct; please follow it in all your interactions with the project.
## Pull Request Process
1. Ensure any install or build dependencies are removed before the end of the layer when doing a
build.
2. Update the README.md with details of changes to the interface, this includes new environment
variables, exposed ports, useful file locations and container parameters.
3. Increase the version numbers in any examples files and the README.md to the new version that this
Pull Request would represent.
4. You may merge the Pull Request in once you have the sign-off of two other developers, or if you
do not have permission to do that, you may request the second reviewer to merge it for you.
## Code of Conduct
Adhere to the code of conduct.

115
benchmark/benchmark.sh Normal file
@@ -0,0 +1,115 @@
#!/usr/bin/env bash
#Requirements:
#<function name without dashes>.wrk descriptor file for wrk
#<function name without dashes>.body (even if you don't need it)
#Configuration variables
functions=(isprime-scale isprime-scale-py isprime-scale-js hello-scale hello-scale-py hello-scale-js hello hello-js hello-py isprime isprime-js isprime-py)
connections=(1000)
times=(1m)
kuberhost="node1:30765"
maxthreads=40
#Wave mode configuration
wave_connection=40
wave_max_conn=160
wave_min_conn=40
wave_time="1m"
wave_loop_max=2
WRK_INSTALLED=$(command -v wrk)
if [[ $WRK_INSTALLED = "" ]]
then
apt update
apt install build-essential libssl-dev git -y
git clone https://github.com/wg/wrk.git wrk
cd wrk || exit
cores=$(cat /proc/cpuinfo | awk '/^processor/{print $3}' | wc -l)
make -j $((cores + 1))
cp wrk /usr/local/bin
fi
HEY_INSTALLED=$(command -v hey)
if [[ $HEY_INSTALLED = "" ]]
then
apt update
apt install -y golang
go get -u github.com/rakyll/hey
cp "$HOME"/go/bin/hey /usr/local/bin
fi
echo -e "Benchmarking functions\n"
for function in "${functions[@]}"
do
function_friendly=$(echo "$function" | cut -d'-' -f1)
echo -e "Benchmarking $function\n"
echo -e "Output of $function is:\n"
perl -pi -e 'chomp if eof' "$function_friendly".body
curl --data-binary @"$function_friendly".body --header "Host: $function.kubeless" --header "Content-Type:application/json" http://$kuberhost/"$function"
echo -e "\n"
if [[ $* = *"--wave"* ]]
then
wave_loop=1
wave_dir_up=true
while [[ $wave_loop -lt $wave_loop_max ]]; do
now=$(date '+%Y-%m-%d-%H-%M')
echo -e "Connections: $wave_connection"
echo -e "Running"
hey -c $wave_connection -z $wave_time -m POST -o csv -host "$function.kubeless" -D "$function_friendly".body -T "application/json" http://$kuberhost/"$function" > ./data/"$function"."$wave_connection"."$now".wave.csv
if $wave_dir_up
then
if [[ $wave_connection -lt $wave_max_conn ]]
then
echo -e "Stepping up"
wave_connection=$((wave_connection * 5))
else
echo -e "Not stepping"
wave_dir_up=false
fi
else
if [[ $wave_connection -gt $wave_min_conn ]]
then
echo -e "Stepping down"
wave_connection=$((wave_connection / 5))
else
echo -e "Not stepping"
wave_dir_up=true
wave_loop=$((wave_loop + 1))
fi
fi
done
else
for connection in "${connections[@]}"
do
if [[ $connection -lt $((maxthreads + 1)) ]]
then
threads=$((connection-1))
else
threads=$maxthreads
fi
echo -e "Threads: $threads Connections $connection\n"
for time in "${times[@]}"
do
datetime=$(date '+%Y-%m-%d-%H-%M-%S')
echo -e "Time: $time\n"
if [[ $* = *"--wrk"* ]]
then
echo -e "wrk $datetime\n"
wrk -t$threads -c"$connection" -d"$time" -s"$function_friendly".wrk -H"Host: $function.kubeless" -H"Content-Type:application/json" --latency http://$kuberhost/"$function" > ./data/"$function"."$connection"."$time"."$datetime".wrk.txt 2>&1
fi
if [[ $* = *"--hey"* ]]
then
echo -e "hey-summary $datetime\n"
hey -c "$connection" -z "$time" -m POST -host "$function.kubeless" -D "$function_friendly".body -T "application/json" http://$kuberhost/"$function" > ./data/"$function"."$connection"."$time"."$datetime".hey.txt
fi
if [[ $* = *"--csv"* ]]
then
echo -e "hey-csv $datetime\n"
hey -c "$connection" -z "$time" -m POST -o csv -host "$function.kubeless" -D "$function_friendly".body -T "application/json" http://$kuberhost/"$function" > ./data/"$function"."$connection"."$time"."$datetime".csv
fi
echo -e "Finished at $datetime"
done
done
fi
done
python3 ./data/process.py > ./data/processed.txt
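For reference, the stepping behaviour of the `--wave` mode above can be mirrored in a few lines of Python (a sketch of the loop's control flow only; the start value, bounds, factor, and loop count come from the configuration variables at the top of the script):

```python
def wave_sequence(start=40, min_conn=40, max_conn=160, factor=5, loops=2):
    """Mirror the --wave control flow: one hey run per iteration, stepping
    the connection count up by `factor` until max_conn is passed, then
    back down, for `loops` cycles."""
    sequence = []
    connections, going_up, loop = start, True, 1
    while loop < loops:
        sequence.append(connections)  # the script runs hey here
        if going_up:
            if connections < max_conn:
                connections *= factor
            else:
                going_up = False
        else:
            if connections > min_conn:
                connections //= factor
            else:
                going_up = True
                loop += 1
    return sequence

print(wave_sequence())
```

Note that with a step factor of 5 and a ceiling of 160, the loop overshoots `wave_max_conn` (40 steps straight to 200) before turning around; whether that is intended is worth double-checking against the configuration.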

61
benchmark/data/process.py Normal file
@@ -0,0 +1,61 @@
#!/usr/bin/env python3
import csv
import os
from pprint import pprint
import numpy as np
import matplotlib.pyplot as plt
#Returns array of csv files in current directory
def getFiles():
    files = [f for f in os.listdir('.') if os.path.isfile(f)]
    return [f for f in files if f.endswith('.csv')]

def processFile(fname):
    with open(fname, 'r') as f:
        data = csv.reader(f)
        fields = next(data)
        responseCodes = {}
        responsePerSec = {}
        responseTimes = []
        for row in data:
            item = {}
            for (name, value) in zip(fields, row):
                item[name] = value.strip()
            # parse response times as floats so min/max compare numerically
            responseTime = float(item['response-time'])
            sec = int(item['offset'].split('.')[0])
            if sec not in responsePerSec:
                responsePerSec[sec] = []
            # append unconditionally so the first sample of a second is kept
            responsePerSec[sec].append(responseTime)
            code = item['status-code']
            if code not in responseCodes:
                responseCodes[code] = 1
            else:
                responseCodes[code] = responseCodes[code] + 1
            responseTimes.append(responseTime)
    if len(responseTimes) != 0:
        print("Maximum response time was ", max(responseTimes))
        print("Minimum response time was ", min(responseTimes))
    else:
        print("csv is empty")
    pprint(responseCodes)
    for sec in responsePerSec:
        if len(responsePerSec[sec]) != 0:
            print(sec, ":")
            print(" Maximum:", max(responsePerSec[sec]))
            print(" Minimum:", min(responsePerSec[sec]))
            print(" Num of responses:", len(responsePerSec[sec]))
        else:
            print(" empty")

def processAllFiles():
    for f in getFiles():
        print("Processing ", f)
        processFile(f)

if __name__ == "__main__":
    processAllFiles()
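The script above consumes hey's `-o csv` output. As a minimal, self-contained sketch of the per-second grouping it is meant to perform (the sample below assumes hey's column names; only the columns the script actually reads are shown):

```python
import csv, io

# A tiny sample in the shape hey -o csv emits (real output has more columns).
SAMPLE = """\
response-time,status-code,offset
0.0312,200,0.10
0.0473,200,0.80
0.0291,200,1.25
"""

def per_second(text):
    """Bucket response times by the whole-second part of the offset column."""
    per_sec = {}
    for row in csv.DictReader(io.StringIO(text)):
        sec = int(row['offset'].split('.')[0])
        per_sec.setdefault(sec, []).append(float(row['response-time']))
    return per_sec

buckets = per_second(SAMPLE)
print({s: (min(v), max(v)) for s, v in sorted(buckets.items())})
```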

@@ -0,0 +1,2 @@
numpy
matplotlib

1
benchmark/hello.body Normal file
@@ -0,0 +1 @@

1
benchmark/hello.wrk Normal file
@@ -0,0 +1 @@
wrk.method = "GET"

1
benchmark/isprime.body Normal file
@@ -0,0 +1 @@
107107

26
benchmark/isprime.wrk Normal file
@@ -0,0 +1,26 @@
wrk.method = "POST"
wrk.body = "107107"
done = function(summary, latency, requests)
-- open output file
f = io.open("result.csv", "a+")
-- write below results to file
-- minimum latency
-- max latency
-- mean of latency
-- standard deviation of latency
-- 50percentile latency
-- 90percentile latency
-- 99percentile latency
-- 99.999percentile latency
-- duration of the benchmark
-- total requests during the benchmark
-- total received bytes during the benchmark
f:write(string.format("%f,%f,%f,%f,%f,%f,%f,%f,%d,%d,%d\n",
latency.min, latency.max, latency.mean, latency.stdev, latency:percentile(50),
latency:percentile(90), latency:percentile(99), latency:percentile(99.999),
summary["duration"], summary["requests"], summary["bytes"]))
f:close()
end

24
benchmark/report.lua Normal file
@@ -0,0 +1,24 @@
done = function(summary, latency, requests)
-- open output file
f = io.open("result.csv", "a+")
-- write below results to file
-- minimum latency
-- max latency
-- mean of latency
-- standard deviation of latency
-- 50percentile latency
-- 90percentile latency
-- 99percentile latency
-- 99.999percentile latency
-- duration of the benchmark
-- total requests during the benchmark
-- total received bytes during the benchmark
f:write(string.format("%f,%f,%f,%f,%f,%f,%f,%f,%d,%d,%d\n",
latency.min, latency.max, latency.mean, latency.stdev, latency:percentile(50),
latency:percentile(90), latency:percentile(99), latency:percentile(99.999),
summary["duration"], summary["requests"], summary["bytes"]))
f:close()
end
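The rows this `done()` hook appends to `result.csv` have no header line. A sketch of reading one back (field names are taken from the comments above; the latency values are whatever wrk's Lua API reports, and the sample row below is made up for illustration):

```python
import csv, io

# Column order as written by the done() hook in report.lua.
FIELDS = ["min", "max", "mean", "stdev", "p50", "p90", "p99", "p99_999",
          "duration", "requests", "bytes"]

def read_results(text):
    """Parse headerless result.csv rows appended by the wrk done() hook."""
    rows = []
    for raw in csv.reader(io.StringIO(text)):
        row = dict(zip(FIELDS, raw))
        for k in FIELDS[:8]:   # latency columns were written with %f
            row[k] = float(row[k])
        for k in FIELDS[8:]:   # duration/requests/bytes were written with %d
            row[k] = int(row[k])
        rows.append(row)
    return rows

sample = "1200.0,80000.0,4200.5,300.2,4100.0,5200.0,8000.0,79000.0,60000000,120000,45000000\n"
print(read_results(sample)[0]["requests"])
```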

176
cluster-deploy Normal file
@@ -0,0 +1,176 @@
#!/bin/bash
# @author: Daniel Keszei <keszei.daniel@gmail.com>
# @description: Kubernetes deployer
# @created: 2019-02-15
# @version: 1.0
# @origin: https://github.com/szefoka/openfaas_lab
# Variable(s)
# Script variable(s)
PID=$$
SCRIPTNAME="$(basename $0)"
WORKER_LIST="worker.list"
EXTERNAL=false
MASTER_IP=""
TOKEN=""
HASH=""
# Functions
function usage {
cat << EOF
Usage: $SCRIPTNAME [--external|-e] <CNI>
--external|-e : Initialize Kubernetes on the external network
instead of on an internal one
Available <CNI> plugins:
* Calico
* Cilium
* Flannel
* WeaveNet
EOF
}
## Send error messages to stderr
function echo_err {
echo "Error: $@" >&2
}
function wait_for_worker {
while [[ "$(kubectl get nodes | grep Ready | grep none | wc -l)" -lt 1 ]];
do
sleep 1
done
}
function wait_for_podnetwork {
#podnetwork should be running on the master and at least one worker node
while [[ "$(kubectl get pods -n kube-system | grep weave-net | grep Running | wc -l)" -lt 2 ]];
do
sleep 1
done
}
# Preflight checks
## Check file from parameters
if [ ! -f $WORKER_LIST ]; then
echo_err "Worker list file ($WORKER_LIST) does not exist."
exit 1
fi
## Check the file contents
if [ ! -s $WORKER_LIST ]; then
echo_err "Worker list file ($WORKER_LIST) is empty."
exit 1
fi
## Create array from file
readarray WORKER < $WORKER_LIST
## Check for argument
if [ "$#" -lt 1 ]; then
echo_err "Missing CNI plugin name as an argument."
exit 1
fi
## Check for help parameter
for i in "$@"
do
### Make the letters of the argument lowercase
i=$(tr '[:upper:]' '[:lower:]' <<< $i)
case $i in
### Print out help message
help|h|-h|--help) usage; exit 0;;
esac
done
## Check parameters and setup variables for Kubernetes installation
for i in "$@"
do
### Make the letters of the argument lowercase
i=$(tr '[:upper:]' '[:lower:]' <<< $i)
case $i in
### Kubernetes network usage (internal|external)
-e|--external) echo "# Kubernetes will be set up for external network. #";
EXTERNAL=true;;
### Set parameters for Calico
calico) echo "[CNI] Calico selected...";
CNI="calico";
POD_NETWORK="192.168.0.0/16";;
### Set parameters for Cilium
cilium) echo "[CNI] Cilium selected...";
CNI="cilium";
POD_NETWORK="";;
### Set parameters for Flannel
flannel) echo "[CNI] Flannel selected...";
CNI="flannel";
POD_NETWORK="10.244.0.0/16";;
### Set parameters for WeaveNet...
weavenet) echo "[CNI] WeaveNet selected...";
CNI="weavenet";
POD_NETWORK="";;
### Wrong argument, print error message
*) echo_err "Unknown parameter: $i option is not valid!";
exit 1;;
esac
done
## Get Master node IP address
if [ "$EXTERNAL" = true ]; then
MASTER_IP=$(grep -oP '(?<=src )[^ ]*' \
<(grep \
-f <(ls -l /sys/class/net | grep pci | awk '{print $9}') \
<(ip ro sh) |
grep -v $(ip ro sh | grep default | awk '{print $5}')) |
head -1)
if [ "x$MASTER_IP" == "x" ]; then
EXTERNAL=false
MASTER_IP=$(grep -oP '(?<=src )[^ ]*' <(ip ro sh | grep default))
fi
else
MASTER_IP=$(grep -oP '(?<=src )[^ ]*' <(ip ro sh | grep default))
fi
## Setup Kubernetes
./deploy/kubernetes_install.sh master $EXTERNAL $MASTER_IP $POD_NETWORK
## Install CNI Plugin
./deploy/${CNI}_setup.sh
TOKEN=$(kubeadm token list | tail -n 1 | cut -d ' ' -f 1)
HASH=sha256:$(openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt |
openssl rsa -pubin -outform der 2>/dev/null |
openssl dgst -sha256 -hex |
sed 's/^.* //')
#FIXME Do I need local docker-registry?
#./deploy/docker_registry_setup.sh $IP:5000
# Join the worker nodes
for WORKERNAME in ${WORKER[@]}; do
echo "[worker:$WORKERNAME] Deploying..."
ssh $WORKERNAME -o "StrictHostKeyChecking no" \
"bash -s" < ./deploy/kubernetes_install.sh worker $EXTERNAL $MASTER_IP:6443 $TOKEN $HASH
#FIXME Do I need to wait for the worker?
# wait_for_worker
#FIXME Do I need local docker-registry?
# ssh $WORKERNAME -o "StrictHostKeyChecking no" "bash -s" < ./deploy/docker_registry_setup.sh $MASTER_IP:5000
echo "[worker:$WORKERNAME] Deployment is completed."
done
#Deploy Kubeless
./deploy/kubeless_setup.sh
#Deploy Metric Server
./deploy/metric_setup.sh

11
cluster-update Normal file
@@ -0,0 +1,11 @@
#!/bin/bash
WORKER_LIST="worker.list"
./update/update.sh
for LINE in $(cat $WORKER_LIST | grep -vE "^#"); do
WORKERNAME=`echo $LINE | awk -F"/" '{print $NF}'`
echo "[worker:$WORKERNAME] Updating..."
ssh $WORKERNAME -o "StrictHostKeyChecking no" "bash -s" < ./update/update.sh
echo "[worker:$WORKERNAME] Update is completed."
done

63
cluster-withdraw Normal file
@@ -0,0 +1,63 @@
#!/bin/bash
# @author: Daniel Keszei <keszei.daniel@gmail.com>
# @description: Kubernetes cluster withdrawer
# @created: 2019-02-26
# @version: 1.0
# Variable(s)
# Script variable(s)
PID=$$
SCRIPTNAME="$(basename $0)"
WORKER_LIST="worker.list"
# Functions
#FIXME Write usage message
function usage {
cat << EOF
EOF
}
## Send error messages to stderr
function echo_err {
echo "Error: $@" >&2
}
## Check file from parameters
if [ ! -f $WORKER_LIST ]; then
echo_err "Worker list file ($WORKER_LIST) does not exist."
exit 1
fi
## Check the file contents
if [ ! -s $WORKER_LIST ]; then
echo_err "Worker list file ($WORKER_LIST) is empty."
exit 1
fi
## Create WORKER array from file
readarray WORKER < $WORKER_LIST
# Reset Master node
./withdraw/node_reset.sh
rm -rf ~/.kube
#FIXME Does the local docker-registry need removal?
#./deploy/docker_registry_setup.sh $IP:5000
# Reset the workers
for LINE in $(cat $WORKER_LIST | grep -vE "^#"); do
WORKERNAME=`echo $LINE | awk -F"/" '{print $NF}'`
echo "[worker:$WORKERNAME] Evicting..."
ssh $WORKERNAME -o "StrictHostKeyChecking no" "bash -s" < ./withdraw/node_reset.sh
#FIXME Does the local docker-registry need removal?
# ssh $WORKERNAME -o "StrictHostKeyChecking no" "bash -s" < ./deploy/docker_registry_setup.sh $IP:5000
echo "[worker:$WORKERNAME] Eviction is completed."
done

3
delete_evicted.sh Normal file
@@ -0,0 +1,3 @@
#!/bin/bash
kubectl get pods --all-namespaces --field-selector 'status.phase==Failed' -o json | kubectl delete -f -

5
deploy/calico_setup.sh Normal file
@@ -0,0 +1,5 @@
#!/bin/bash
## Apply Calico CNI plugin
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml

4
deploy/cilium_setup.sh Normal file
@@ -0,0 +1,4 @@
#!/bin/bash
## Apply Cilium CNI plugin
kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.4/examples/kubernetes/1.13/cilium.yaml

@@ -0,0 +1,7 @@
#!/bin/bash
IP=$1
sed "/ExecStart/ s/$/ --insecure-registry=$IP/" /lib/systemd/system/docker.service > /lib/systemd/system/tmp
mv /lib/systemd/system/tmp /lib/systemd/system/docker.service
systemctl daemon-reload
systemctl restart docker.service
docker run -d -p 5000:5000 --restart=always --name registry registry:2

4
deploy/flannel_setup.sh Normal file
@@ -0,0 +1,4 @@
#!/bin/bash
## Apply Flannel CNI plugin
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

8
deploy/gloo_setup.sh Normal file
@@ -0,0 +1,8 @@
#!/usr/bin/env bash
kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.7.3/manifests/metallb.yaml
curl -sL https://run.solo.io/gloo/install | sh
export PATH=$HOME/.gloo/bin:$PATH
glooctl install ingress

13
deploy/kafka_pv.yml Normal file
@@ -0,0 +1,13 @@
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir
  labels:
    kubeless: kafka
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
  hostPath:
    path: "/root/paprika-k/"

@@ -0,0 +1,5 @@
#!/bin/bash
kubectl create -f kafka_pv.yml
kubectl create -f zoo_pv.yml
export RELEASE=$(curl -s https://api.github.com/repos/kubeless/kafka-trigger/releases/latest | grep tag_name | cut -d '"' -f 4)
kubectl create -f https://github.com/kubeless/kafka-trigger/releases/download/$RELEASE/kafka-zookeeper-$RELEASE.yaml

17
deploy/kubeless_setup.sh Normal file
@@ -0,0 +1,17 @@
#!/bin/bash
RELEASE=$(curl -s https://api.github.com/repos/kubeless/kubeless/releases/latest | grep tag_name | cut -d '"' -f 4)
kubectl create ns kubeless
kubectl create -f https://github.com/kubeless/kubeless/releases/download/$RELEASE/kubeless-$RELEASE.yaml
#kubectl create -f https://github.com/kubeless/kubeless/releases/download/$RELEASE/kubeless-non-rbac-$RELEASE.yaml
apt install -y unzip
#kubeless command
OS=$(uname -s| tr '[:upper:]' '[:lower:]')
curl -OL https://github.com/kubeless/kubeless/releases/download/$RELEASE/kubeless_$OS-amd64.zip && \
unzip kubeless_$OS-amd64.zip && \
sudo mv bundles/kubeless_$OS-amd64/kubeless /usr/local/bin/
#Ingress nginx
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/baremetal/service-nodeport.yaml

@@ -0,0 +1,3 @@
#!/bin/bash
kubectl create -f https://raw.githubusercontent.com/kubeless/kubeless-ui/master/k8s.yaml

@@ -0,0 +1,23 @@
#!/bin/bash
#Gen certificates
mkdir -p certs
cd certs
CERT_DIR=$PWD
openssl genrsa -des3 -passout pass:x -out dashboard.pass.key 2048
openssl rsa -passin pass:x -in dashboard.pass.key -out dashboard.key
rm dashboard.pass.key
openssl req -new -key dashboard.key -out dashboard.csr -subj "/C=US/ST=Denial/L=Springfield/O=Dis/CN=www.example.com"
openssl x509 -req -sha256 -days 365 -in dashboard.csr -signkey dashboard.key -out dashboard.crt
kubectl create secret generic kubernetes-dashboard-certs --from-file=$CERT_DIR -n kube-system
cd ..
#Deploy the dashboard
#wget https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
wget https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml
sed -i '176i\ type: LoadBalancer' kubernetes-dashboard.yaml
kubectl apply -f kubernetes-dashboard.yaml
#Token based dashboard authentication
kubectl create serviceaccount k8sadmin -n kube-system
kubectl create clusterrolebinding k8sadmin --clusterrole=cluster-admin --serviceaccount=kube-system:k8sadmin

@@ -0,0 +1,93 @@
#!/bin/bash
# Setting all parameters
NODE_TYPE=$1
## $2 carries EXTERNAL from the deployer; the node uses the internal network when it is not "true"
if [ "$2" = "true" ]; then INTERNAL=false; else INTERNAL=true; fi
MASTER_IP=$3
## Parameters for master node installation
if [ "$NODE_TYPE" == "master" ]
then
if [ "$#" -lt 4 ]; then
POD_NETWORK_ARG=""
else
POD_NETWORK_ARG="--pod-network-cidr=$4"
fi
# Parameters for worker node installation
elif [ "$NODE_TYPE" == "worker" ]
then
TOKEN=$4
HASH=$5
fi
#Installing Docker
DOCKER_INSTALLED=$(which docker)
if [ "$DOCKER_INSTALLED" = "" ]
then
apt-get remove docker docker-engine docker.io
apt-get update
apt-get install -y apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt-get update
apt-get install -y docker-ce
fi
#Installing Kubernetes
KUBERNETES_INSTALLED=$(which kubeadm)
if [ "$KUBERNETES_INSTALLED" = "" ]
then
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
touch /etc/apt/sources.list.d/kubernetes.list
chmod 666 /etc/apt/sources.list.d/kubernetes.list
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
apt-get update
apt-get install -y kubelet kubeadm kubectl kubernetes-cni
fi
#Disabling swap for Kubernetes
sysctl net.bridge.bridge-nf-call-iptables=1 > /dev/null
swapoff -a
# Initialize Kubernetes as Master node
if [ "$NODE_TYPE" == "master" ]
then
## Set master node for internal network
if [ "$INTERNAL" = true ]; then
touch /etc/default/kubelet
echo "KUBELET_EXTRA_ARGS=--node-ip=$MASTER_IP" > /etc/default/kubelet
fi
## Init Kubernetes
kubeadm init --ignore-preflight-errors=SystemVerification \
--apiserver-advertise-address=$MASTER_IP $POD_NETWORK_ARG
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
echo "[master:$(hostname -s)] Node is up and running on $MASTER_IP"
# Initialize Kubernetes as Worker node
elif [ "$NODE_TYPE" = "worker" ]
then
## Set worker node for internal network
if [ "$INTERNAL" = true ]; then
IP=$(grep -oP \
'(?<=src )[^ ]*' \
<(grep -f <(ls -l /sys/class/net | grep pci | awk '{print $9}') \
<(ip ro sh) |
grep -v $(ip ro sh | grep default | awk '{print $5}')) |
head -1)
touch /etc/default/kubelet
echo "KUBELET_EXTRA_ARGS=--node-ip=$IP" > /etc/default/kubelet
else
IP=$(grep -oP '(?<=src )[^ ]*' <(ip ro sh | grep default))
fi
## Join to Kubernetes Master node
kubeadm join $MASTER_IP --token $TOKEN --discovery-token-ca-cert-hash $HASH \
--ignore-preflight-errors=SystemVerification
echo "[worker:$(hostname -s)] Client ($IP) joined to Master ($MASTER_IP)"
else
echo "Invalid argument"
fi

5
deploy/metric_setup.sh Normal file
@@ -0,0 +1,5 @@
#!/bin/bash
git clone https://github.com/kubernetes-incubator/metrics-server.git
sed -i '34i\ command:\' metrics-server/deploy/1.8+/metrics-server-deployment.yaml
sed -i '35i\ - /metrics-server\' metrics-server/deploy/1.8+/metrics-server-deployment.yaml
sed -i '36i\ - --kubelet-insecure-tls\' metrics-server/deploy/1.8+/metrics-server-deployment.yaml
kubectl create -f metrics-server/deploy/1.8+/

4
deploy/weavenet_setup.sh Normal file
@@ -0,0 +1,4 @@
#!/bin/bash
## Apply WeaveNet CNI plugin
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

13
deploy/zoo_pv.yml Normal file
@@ -0,0 +1,13 @@
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper
  labels:
    kubeless: zookeeper
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
  hostPath:
    path: "/root/paprika-z/"

5
functions/Gopkg.toml Normal file
@@ -0,0 +1,5 @@
ignored = ["github.com/kubeless/kubeless/pkg/functions"]
[[constraint]]
name = "github.com/sirupsen/logrus"
branch = "master"

@@ -0,0 +1,11 @@
#!/bin/bash
#$1=runtime
#$2=filename
#$3=function name
#$4=handler
kubeless function deploy $3 --runtime $1 --from-file $2 --handler $4
kubeless trigger http create $3 --function-name $3 --path $3 --hostname $3.kubeless
#kubeless autoscale create $3 --max 32 --metric "cpu" --min 3 --value "50"
#Test with curl --data '{"Another": "Echo"}' --header "Host: get-python.192.168.99.100.nip.io" --header "Content-Type:application/json" 192.168.99.100/echo

@@ -0,0 +1,11 @@
#!/bin/bash
#$1=runtime
#$2=filename
#$3=function name
#$4=handler
kubeless function deploy $3 --runtime $1 --from-file $2 --handler $4
kubeless trigger kafka create $3 --function-selector created-by=kubeless,function=$3 --trigger-topic "$3-topic"
#Test from within cluster
#kubeless topic publish --topic "$3-topic" --data "Hello World!"

@@ -0,0 +1,42 @@
#!/bin/bash
#TODO add a python3 runtime, and further runtimes as needed
# For every file in the functions directory, call the deploy_function.sh script so that
# the function name is the file name without its extension and the handler name is
# the name of the function defined in the file
#TODO if a file ever contains several functions, a default one could be designated as the handler
for x in *; do
if [ $x = 'deploy_function.sh' ]; then
continue
fi
if [ $x = 'get_functions.sh' ]; then
continue
fi
echo "Deploying $x"
ispython=$(echo "$x" | grep '\.py$')
#ispython3=$(cat $x | grep python3)
isgolang=$(echo "$x" | grep '\.go$')
if [ ! "$ispython" = "" ]; then
handle=$( cat $x | grep def | sed 's/def \(.*\)(.*/\1/' )
funcname=$( echo $x | sed 's/\(.*\)\.py/\1/')
sh deploy_function.sh python2.7 $x $funcname $handle
echo "file name: $x"
echo "function name: $funcname"
echo "handle name: $handle"
elif [ ! "$isgolang" = "" ]; then
echo "before go handler extraction: $x"
handle=$( cat $x | grep 'func ' | sed 's/func \(.*\)(.*(.*/\1/' )
funcname=$( echo $x | sed 's/\(.*\)\.go/\1/')
sh deploy_function.sh go1.10 $x $funcname $handle
echo "file name: $x"
echo "function name: $funcname"
echo "handle name: $handle"
fi
done

10
functions/helloget.go Normal file
@@ -0,0 +1,10 @@
package kubeless
import (
"github.com/kubeless/kubeless/pkg/functions"
)
// Foo sample function
func Foo(event functions.Event, context functions.Context) (string, error) {
return "Hello world!", nil
}

5
functions/helloget.js Normal file
@@ -0,0 +1,5 @@
module.exports = {
foo: function (event, context) {
return 'hello world!';
}
}

2
functions/helloget.py Normal file
@@ -0,0 +1,2 @@
def foo(event, context):
    return "hello world"

27
functions/isprime.go Normal file
@@ -0,0 +1,27 @@
package kubeless
import (
"fmt"
"math"
"strconv"
"github.com/kubeless/kubeless/pkg/functions"
"github.com/sirupsen/logrus"
)
func IsPrime(event functions.Event, context functions.Context) (string, error) {
num, err := strconv.Atoi(event.Data)
if err != nil {
return "", fmt.Errorf("Failed to parse %s as int! %v", event.Data, err)
}
logrus.Infof("Checking if %s is prime", event.Data)
if num <= 1 {
return fmt.Sprintf("%d is not prime", num), nil
}
for i := 2; i <= int(math.Floor(float64(num)/2)); i++ {
if num%i == 0 {
return fmt.Sprintf("%d is not prime", num), nil
}
}
return fmt.Sprintf("%d is prime", num), nil
}

26
functions/isprime.js Normal file
@@ -0,0 +1,26 @@
module.exports = {
  handler: (event, context) => {
    // event.data arrives as text; parse it before doing arithmetic on it
    var num = parseInt(event.data, 10);
    if (num == 1) return "Not Prime";
    num += 2;
    var sieve = new Array(num)
      .join(',').split(',') // get values for map to work
      .map(function () { return "Prime"; });
    for (var i = 2; i <= num; i++) {
      if (sieve[i]) {
        for (var j = i * i; j < num; j += i) {
          sieve[j] = false;
        }
      }
    }
    if (sieve[num - 2]) {
      return "Prime";
    } else {
      return "Not Prime";
    }
  },
};

13
functions/isprime.py Normal file
@@ -0,0 +1,13 @@
def isprime(event, context):
    n = event['data']
    if n == 2 or n == 3: return "Prime"
    if n < 2 or n % 2 == 0: return "Not Prime"
    if n < 9: return "Prime"
    if n % 3 == 0: return "Not Prime"
    r = int(n**0.5)
    f = 5
    while f <= r:
        if n % f == 0: return "Not Prime"
        if n % (f + 2) == 0: return "Not Prime"
        f += 6
    return "Prime"
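The 6k±1 trial division used by this handler can be sanity-checked against a naive reference (a quick standalone harness, not part of the deployment; `isprime_check` below repeats the handler's logic without the kubeless event wrapper):

```python
def isprime_check(n):
    # Same 6k+/-1 trial-division logic as the kubeless handler above.
    if n == 2 or n == 3: return "Prime"
    if n < 2 or n % 2 == 0: return "Not Prime"
    if n < 9: return "Prime"
    if n % 3 == 0: return "Not Prime"
    r = int(n**0.5)
    f = 5
    while f <= r:
        if n % f == 0: return "Not Prime"
        if n % (f + 2) == 0: return "Not Prime"
        f += 6
    return "Prime"

def naive(n):
    # Brute-force reference implementation.
    return "Prime" if n > 1 and all(n % d for d in range(2, n)) else "Not Prime"

assert all(isprime_check(n) == naive(n) for n in range(1, 500))
print(isprime_check(107107))  # the payload shipped in benchmark/isprime.body
```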

70
functions/matrix.go Normal file
@@ -0,0 +1,70 @@
package kubeless
import (
"fmt"
)
func main() {
//Defining 2D matrices
m1 := [3][3]int{
[3]int{1, 1, 1},
[3]int{1, 1, 1},
[3]int{1, 1, 1},
}
m2 := [3][3]int{
[3]int{1, 1, 1},
[3]int{1, 1, 1},
[3]int{1, 1, 1},
}
//Declaring a matrix variable for holding the multiplication results
var m3 [3][3]int
for i := 0; i < 3; i++ {
for j := 0; j < 3; j++ {
m3[i][j] = 0
for k := 0; k < 3; k++ {
m3[i][j] = m3[i][j] + (m1[i][k] * m2[k][j])
}
}
}
twoDimensionalMatrices := [3][3][3]int{m1, m2, m3}
matrixNames := []string{"MATRIX1", "MATRIX2", "MATRIX3 = MATRIX1*MATRIX2"}
for index, m := range twoDimensionalMatrices {
fmt.Println(matrixNames[index],":")
showMatrixElements(m)
fmt.Println()
}
}
//A function that displays matrix elements
func showMatrixElements(m [3][3]int) {
for i := 0; i < 3; i++ {
for j := 0; j < 3; j++ {
fmt.Printf("%d\t", m[i][j])
}
fmt.Println()
}
}
/*
MATRIX1 :
1 1 1
1 1 1
1 1 1
MATRIX2 :
1 1 1
1 1 1
1 1 1
MATRIX3 = MATRIX1*MATRIX2 :
3 3 3
3 3 3
3 3 3
*/

@@ -0,0 +1,242 @@
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kafka-controller-deployer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kafka-controller-deployer
subjects:
- kind: ServiceAccount
  name: controller-acct
  namespace: kubeless
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: kafka
  namespace: kubeless
spec:
  serviceName: broker
  template:
    metadata:
      labels:
        kubeless: kafka
    spec:
      containers:
      - env:
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: broker.kubeless
        - name: KAFKA_ADVERTISED_PORT
          value: "9092"
        - name: KAFKA_PORT
          value: "9092"
        - name: KAFKA_DELETE_TOPIC_ENABLE
          value: "true"
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zookeeper.kubeless:2181
        - name: ALLOW_PLAINTEXT_LISTENER
          value: "yes"
        image: bitnami/kafka:1.1.0-r0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          initialDelaySeconds: 30
          tcpSocket:
            port: 9092
        name: broker
        ports:
        - containerPort: 9092
        volumeMounts:
        - mountPath: /bitnami/kafka/data
          name: datadir
      initContainers:
      - command:
        - sh
        - -c
        - chmod -R g+rwX /bitnami
        image: busybox
        imagePullPolicy: IfNotPresent
        name: volume-permissions
        volumeMounts:
        - mountPath: /bitnami/kafka/data
          name: datadir
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: kafka
  namespace: kubeless
spec:
  ports:
  - port: 9092
  selector:
    kubeless: kafka
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: zoo
  namespace: kubeless
spec:
  serviceName: zoo
  template:
    metadata:
      labels:
        kubeless: zookeeper
    spec:
      containers:
      - env:
        - name: ZOO_SERVERS
          value: server.1=zoo-0.zoo:2888:3888:participant
        - name: ALLOW_ANONYMOUS_LOGIN
          value: "yes"
        image: bitnami/zookeeper:3.4.10-r12
        imagePullPolicy: IfNotPresent
        name: zookeeper
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: peer
        - containerPort: 3888
          name: leader-election
        volumeMounts:
        - mountPath: /bitnami/zookeeper
          name: zookeeper
      initContainers:
      - command:
        - sh
        - -c
        - chmod -R g+rwX /bitnami
        image: busybox
        imagePullPolicy: IfNotPresent
        name: volume-permissions
        volumeMounts:
        - mountPath: /bitnami/zookeeper
          name: zookeeper
  volumeClaimTemplates:
  - metadata:
      name: zookeeper
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: zoo
  namespace: kubeless
spec:
  clusterIP: None
  ports:
  - name: peer
    port: 9092
  - name: leader-election
    port: 3888
  selector:
    kubeless: zookeeper
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
  namespace: kubeless
spec:
  ports:
  - name: client
    port: 2181
  selector:
    kubeless: zookeeper
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  labels:
    kubeless: kafka-trigger-controller
  name: kafka-trigger-controller
  namespace: kubeless
spec:
  selector:
    matchLabels:
      kubeless: kafka-trigger-controller
  template:
    metadata:
      labels:
        kubeless: kafka-trigger-controller
    spec:
      containers:
      - env:
        - name: KUBELESS_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: KUBELESS_CONFIG
          value: kubeless-config
        image: bitnami/kafka-trigger-controller:v1.0.1
        imagePullPolicy: IfNotPresent
        name: kafka-trigger-controller
      serviceAccountName: controller-acct
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: kafka-controller-deployer
rules:
- apiGroups:
  - ""
  resources:
  - services
  - configmaps
  verbs:
  - get
  - list
- apiGroups:
  - kubeless.io
  resources:
  - functions
  - kafkatriggers
  verbs:
  - get
  - list
  - watch
  - update
  - delete
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: kafkatriggers.kubeless.io
spec:
  group: kubeless.io
  names:
    kind: KafkaTrigger
    plural: kafkatriggers
    singular: kafkatrigger
  scope: Namespaced
  version: v1beta1
---
apiVersion: v1
kind: Service
metadata:
  name: broker
  namespace: kubeless
spec:
  type: NodePort
  ports:
  - port: 9092
    nodePort: 30092
  selector:
    kubeless: kafka

kafka-testing/log Normal file

@@ -0,0 +1,54 @@
possible solution: ssh into kafka pod and check out 10broker-config.yml
broker nodeport not well implemented
node1:
  change broker service to NodePort
  reinstall kafka
  add kafka-0 ip + kafka-0.broker.kubeless.svc.cluster.local to hosts -> not helping
node4:
  install kafkacat:
    sudo apt-get install kafkacat
  command:
    echo 'Hello World!' | kafkacat -P -b node1:30092 -t test-topic
  reply:
    % ERROR: Local: Host resolution failure: kafka-0.broker.kubeless.svc.cluster.local:9092/1001: Failed to resolve 'kafka-0.broker.kubeless.svc.cluster.local:9092': Name or service not known
    % Delivery failed for message: Local: Message timed out
  command:
    kafkacat -L -b node1:30092 -t test-topic
  reply:
    Metadata for test-topic (from broker -1: node1:30092/bootstrap):
     1 brokers:
      broker 1001 at kafka-0.broker.kubeless.svc.cluster.local:9092
     1 topics:
      topic "test-topic" with 1 partitions:
        partition 0, leader 1001, replicas: 1001, isrs: 1001
  command:
    echo 'Hello World!' | kafkacat -d broker -P -b node1:30092 -t test-topic
%7|1554849553.120|BRKMAIN|rdkafka#producer-1| [thrd::0/internal]: :0/internal: Enter main broker thread
%7|1554849553.120|STATE|rdkafka#producer-1| [thrd::0/internal]: :0/internal: Broker changed state INIT -> UP
%7|1554849553.120|BROKER|rdkafka#producer-1| [thrd:app]: node1:30092/bootstrap: Added new broker with NodeId -1
%7|1554849553.120|BRKMAIN|rdkafka#producer-1| [thrd:node1:30092/bootstrap]: node1:30092/bootstrap: Enter main broker thread
%7|1554849553.120|CONNECT|rdkafka#producer-1| [thrd:node1:30092/bootstrap]: node1:30092/bootstrap: broker in state INIT connecting
%7|1554849553.121|CONNECT|rdkafka#producer-1| [thrd:node1:30092/bootstrap]: node1:30092/bootstrap: Connecting to ipv4#10.10.1.1:30092 (plaintext) with socket 7
%7|1554849553.121|STATE|rdkafka#producer-1| [thrd:node1:30092/bootstrap]: node1:30092/bootstrap: Broker changed state INIT -> CONNECT
%7|1554849553.121|CONNECT|rdkafka#producer-1| [thrd:node1:30092/bootstrap]: node1:30092/bootstrap: Connected to ipv4#10.10.1.1:30092
%7|1554849553.121|CONNECTED|rdkafka#producer-1| [thrd:node1:30092/bootstrap]: node1:30092/bootstrap: Connected (#1)
%7|1554849553.121|FEATURE|rdkafka#producer-1| [thrd:node1:30092/bootstrap]: node1:30092/bootstrap: Updated enabled protocol features +ApiVersion to ApiVersion
%7|1554849553.121|STATE|rdkafka#producer-1| [thrd:node1:30092/bootstrap]: node1:30092/bootstrap: Broker changed state CONNECT -> APIVERSION_QUERY
%7|1554849553.122|FEATURE|rdkafka#producer-1| [thrd:node1:30092/bootstrap]: node1:30092/bootstrap: Updated enabled protocol features to MsgVer1,ApiVersion,BrokerBalancedConsumer,ThrottleTime,Sasl,SaslHandshake,BrokerGroupCoordinator,LZ4,OffsetTime,MsgVer2
%7|1554849553.122|STATE|rdkafka#producer-1| [thrd:node1:30092/bootstrap]: node1:30092/bootstrap: Broker changed state APIVERSION_QUERY -> UP
%7|1554849553.122|BROKER|rdkafka#producer-1| [thrd:main]: kafka-0.broker.kubeless.svc.cluster.local:9092/1001: Added new broker with NodeId 1001
%7|1554849553.122|CLUSTERID|rdkafka#producer-1| [thrd:main]: node1:30092/bootstrap: ClusterId update "" -> "MtPhSSSqQaCeGu-7DPmVVw"
%7|1554849553.122|BRKMAIN|rdkafka#producer-1| [thrd:kafka-0.broker.kubeless.svc.cluster.local:9092/1001]: kafka-0.broker.kubeless.svc.cluster.local:9092/1001: Enter main broker thread
%7|1554849553.123|CONNECT|rdkafka#producer-1| [thrd:kafka-0.broker.kubeless.svc.cluster.local:9092/1001]: kafka-0.broker.kubeless.svc.cluster.local:9092/1001: broker in state INIT connecting
%7|1554849553.123|BROKERFAIL|rdkafka#producer-1| [thrd:kafka-0.broker.kubeless.svc.cluster.local:9092/1001]: kafka-0.broker.kubeless.svc.cluster.local:9092/1001: failed: err: Local: Host resolution failure: (errno: Bad address)
%7|1554849553.123|STATE|rdkafka#producer-1| [thrd:kafka-0.broker.kubeless.svc.cluster.local:9092/1001]: kafka-0.broker.kubeless.svc.cluster.local:9092/1001: Broker changed state INIT -> DOWN
%7|1554849553.123|TOPBRK|rdkafka#producer-1| [thrd:kafka-0.broker.kubeless.svc.cluster.local:9092/1001]: kafka-0.broker.kubeless.svc.cluster.local:9092/1001: Topic test-topic [0]: joining broker (rktp 0x7f80ec001860)
% ERROR: Local: Host resolution failure: kafka-0.broker.kubeless.svc.cluster.local:9092/1001: Failed to resolve 'kafka-0.broker.kubeless.svc.cluster.local:9092': Name or service not known
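The failure mode in the log above can be restated in a small sketch: the bootstrap address `node1:30092` answers, but the metadata hands the client the *advertised* listener, which only resolves inside the cluster. The snippet below replays this on the metadata text copied from the `kafkacat -L` reply (the `sed` helper is illustrative, not part of the repo):

```shell
#!/bin/sh
# Metadata text copied from the `kafkacat -L -b node1:30092` reply above.
metadata='Metadata for test-topic (from broker -1: node1:30092/bootstrap):
 1 brokers:
  broker 1001 at kafka-0.broker.kubeless.svc.cluster.local:9092'

# Extract the advertised broker address that the client actually dials
# for produce/consume after bootstrapping.
advertised=$(printf '%s\n' "$metadata" | sed -n 's/.*broker 1001 at //p')
echo "$advertised"
```

This is why the hosts-file attempt on its own cannot fix the problem: the broker advertises port 9092 (`KAFKA_ADVERTISED_PORT` in the manifest), while only NodePort 30092 is reachable from outside, so even a resolvable name would dial the wrong port.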

package-latest Normal file

@@ -0,0 +1,16 @@
#!/bin/bash
apt-get install -y apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt-get update
apt-cache madison docker-ce | cut -d'|' -f2 | tr -d ' ' | head -n 1 > docker.version
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
touch /etc/apt/sources.list.d/kubernetes.list
chmod 666 /etc/apt/sources.list.d/kubernetes.list
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
apt-get update
apt-cache madison kubelet | cut -d'|' -f2 | tr -d ' ' | head -n 1 > kubernetes.version
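The least obvious part of the script above is the `apt-cache madison … | cut -d'|' -f2 | tr -d ' ' | head -n 1` pipeline, which records the newest available package version into a file. It can be illustrated on a canned madison-style line (the version string here is a made-up example, not output from the commit):

```shell
#!/bin/sh
# A sample line in `apt-cache madison` format: fields separated by '|'
# (package | version | source). The version value is illustrative only.
madison=" docker-ce | 5:18.09.5~3-0~ubuntu-bionic | https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages"

# Field 2 is the version; strip the padding spaces; keep the first
# (newest) candidate, exactly as the script does before writing
# docker.version / kubernetes.version.
version=$(printf '%s\n' "$madison" | cut -d'|' -f2 | tr -d ' ' | head -n 1)
echo "$version"
```

Presumably the `docker.version` and `kubernetes.version` files are consumed later to pin `apt-get install` to those exact versions, though that step is not part of this file.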

update/update.sh Normal file

@@ -0,0 +1,4 @@
#!/bin/bash
apt update
apt upgrade -y
apt autoremove

withdraw/node_reset.sh Normal file

@@ -0,0 +1,7 @@
#!/bin/bash
kubeadm reset --force
test -f /etc/default/kubelet && rm /etc/default/kubelet
iptables -F && iptables -t nat -F && iptables -t mangle -F && \
iptables -X && iptables -t nat -X && iptables -t mangle -X
systemctl restart docker

worker.list.example Normal file

@@ -0,0 +1,3 @@
node2
node3
node4