go-go mod init error

When initializing a new Go project, go mod init fails with:

go: cannot determine module path for source directory /home/xxx/go/src/baremachine-api (outside GOPATH, module path must be specified)

Solution:
go mod init needs a module path when the project lives outside GOPATH. Open any go.mod file and you will see the first line declares one:

module ProjectName

So pass the module path explicitly when running go mod init, e.g.:

go mod init ProjectName

go-installing common modules: ceph/redis/memcache/mysql

On CentOS 7 x86_64 with Go 1.14, install the go-ceph/mysql/redis/memcached modules.

  1. Install go-ceph

    #vim /etc/yum.repos.d/ceph.repo
    [ceph]
    name=Ceph noarch packages
    baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/x86_64/
    enabled=1
    gpgcheck=1
    type=rpm-md
    gpgkey=http://mirrors.aliyun.com/ceph/keys/release.asc
    [ceph-deploy]
    name=Ceph noarch packages
    baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/noarch/
    enabled=1
    gpgcheck=1
    type=rpm-md
    gpgkey=http://mirrors.aliyun.com/ceph/keys/release.asc

    #yum -y install libcephfs-devel librbd-devel librados-devel
    #go get github.com/ceph/go-ceph
    #go get github.com/ceph/go-ceph/rados
  2. Install the redis client

    #go get github.com/gomodule/redigo/redis
  3. Install the mysql ORM (gorm)

    #go get github.com/jinzhu/gorm
  4. Install the memcached client

    #go get github.com/bradfitz/gomemcache/memcache
  5. Install the OpenStack SDK gophercloud

    #go get github.com/gophercloud/gophercloud
  6. Install the config-parsing module viper

    #export GO111MODULE=on
    #go get "github.com/spf13/viper"

heat-a Go demo for multi-host orchestration with Heat

We need to test orchestration with Heat, using Go to create a stack.

Based on OpenStack Stein.

Goal: given VM names, a security group, a network, and a flavor, create two VMs.

package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "io/ioutil"
    "net/http"
    "time"
)

var (
    // global defaults
    VolumeType      = "sata"
    AvailableZone   = "nova"
    TemplateVersion = "2018-08-31"
)

type PostBody struct {
    Files           interface{} `json:"files"`
    DisableRollback bool        `json:"disable_rollback"`
    Parameters      interface{} `json:"parameters"`
    StackName       string      `json:"stack_name"`
    Template        Template    `json:"template"`
    Environment     interface{} `json:"environment"`
}

type Template struct {
    HeatTemplateVersion string      `json:"heat_template_version"`
    Description         string      `json:"description"`
    Parameters          interface{} `json:"parameters"`
    Resources           interface{} `json:"resources"`
}

type VmBody struct {
    Name            string `json:"name"`
    ImageId         string `json:"image_id"`
    NetworkId       string `json:"network_id"`
    Flavor          string `json:"flavor"`
    DiskSize        string `json:"disk_size"`
    SecurityGroupId string `json:"security_group_id"`
}

// resource definitions
type ResourceServer struct {
    Type       string         `json:"type"`
    Properties ServerProperty `json:"properties"`
}

type ServerProperty struct {
    Name           string                  `json:"name"`
    Image          string                  `json:"image"`
    Flavor         string                  `json:"flavor"`
    Networks       []ServerNetworkProperty `json:"networks"`
    SecurityGroups []string                `json:"security_groups"`
}

type ServerNetworkProperty struct {
    Network string `json:"network"`
}

type ResourceVolume struct {
    Type       string         `json:"type"`
    Properties VolumeProperty `json:"properties"`
}

type VolumeProperty struct {
    Size             string `json:"size"`
    VolumeType       string `json:"volume_type"`
    AvailabilityZone string `json:"availability_zone"`
    Description      string `json:"description"`
}

type ResourceVolumeAttach struct {
    Type       string                 `json:"type"`
    Properties VolumeAttachProperties `json:"properties"`
}

type VolumeAttachProperties struct {
    VolumeId     PropertyInfo `json:"volume_id"`
    InstanceUuid PropertyInfo `json:"instance_uuid"`
}

type PropertyInfo struct {
    GetResource string `json:"get_resource"`
}

// CreateStack is an empty placeholder for wrapping the request logic below.
func CreateStack(stackName string) {
}

// test Heat stack creation
func main() {
    // The frontend sends something like:
    // [{"name":"server1","flavor":"ecs.small","image_id":"xxxxxxx","network_id":"xxxxx"},
    //  {"name":"server2","flavor":"ecs.large","image_id":"xxxx","network_id":"xxxxx"}]

    var ServerList []VmBody
    ServerList = append(ServerList, VmBody{
        Name:            "server1",
        Flavor:          "ecs.small",
        ImageId:         "67e7da07-e9d6-4f24-840e-205259d27913",
        NetworkId:       "6582c415-3578-4dd4-ac45-8920a4194462",
        DiskSize:        "50",
        SecurityGroupId: "c7ff8d7e-de91-41cc-90e6-76d8d51d2688",
    })
    ServerList = append(ServerList, VmBody{
        Name:            "server2",
        Flavor:          "ecs.large",
        ImageId:         "84ae6cba-7e00-4f6c-907f-3e6832e6d825",
        NetworkId:       "6582c415-3578-4dd4-ac45-8920a4194462",
        DiskSize:        "100",
        SecurityGroupId: "c7ff8d7e-de91-41cc-90e6-76d8d51d2688",
    })
    stackName := "zetao-test2"
    regionName := "RegionOne"
    // This example uses Cinder volumes; for local disks simply omit the volume resources.
    resourcesServerMap := make(map[string]interface{})

    for _, s := range ServerList {
        var serverNetworks []ServerNetworkProperty
        var securityGroups []string
        serverNetworks = append(serverNetworks, ServerNetworkProperty{
            Network: s.NetworkId,
        })
        securityGroups = append(securityGroups, s.SecurityGroupId)
        resourcesServerMap[s.Name] = ResourceServer{
            Type: "OS::Nova::Server",
            Properties: ServerProperty{
                Name:           s.Name,
                Flavor:         s.Flavor,
                Image:          s.ImageId,
                Networks:       serverNetworks,
                SecurityGroups: securityGroups,
            },
        }
        serverVolumeName := fmt.Sprintf("%sVolume", s.Name)
        resourcesServerMap[serverVolumeName] = ResourceVolume{
            Type: "OS::Cinder::Volume",
            Properties: VolumeProperty{
                Size:             s.DiskSize,
                VolumeType:       VolumeType,
                AvailabilityZone: AvailableZone,
                Description:      serverVolumeName,
            },
        }
        serverVolumeAttachName := fmt.Sprintf("%sVolume_attachment", s.Name)
        // volume attachment info for the server
        resourcesServerMap[serverVolumeAttachName] = ResourceVolumeAttach{
            Type: "OS::Cinder::VolumeAttachment",
            Properties: VolumeAttachProperties{
                VolumeId: PropertyInfo{
                    GetResource: serverVolumeName,
                },
                InstanceUuid: PropertyInfo{
                    GetResource: s.Name,
                },
            },
        }
    }

    heatTemplate := Template{
        HeatTemplateVersion: TemplateVersion,
        Description:         "Heat Template",
        Parameters:          struct{}{},
        Resources:           resourcesServerMap,
    }
    postBody := PostBody{
        Files:           struct{}{},
        DisableRollback: true,
        Parameters:      struct{}{},
        StackName:       stackName,
        Template:        heatTemplate,
        Environment:     struct{}{},
    }

    b, _ := json.Marshal(postBody)
    fmt.Println(string(b))
    // send the POST request; check the error before touching req
    OpenstackToken := "gAAAAABgN1v2CCL09LAS3aokDkccXuLAn4RZx60z-vLdzDkXyw5oX6HKsXmoKWBaKOY27RcRr4SG1igRC68nHJWtwfGUbs4kMH2ultNCxAKgQz3KZEIXJ6L2gc832L0g_ngyzm93bovWwl8iMqWV7oZzBtGqsB8nIhVvSZqq2M6BzfWBc1cV980"
    postUrl := "http://10.19.114.193:8004/v1/319bb16c497f422591f0688b2aec3f76/stacks"
    req, err := http.NewRequest("POST", postUrl, bytes.NewBuffer(b))
    if err != nil {
        fmt.Println(err)
        return
    }
    //req.Header.Set("User-Agent", "python-heatclient")
    req.Header.Set("Accept", "application/json")
    req.Header.Set("Content-Type", "application/json")
    req.Header.Set("X-Auth-Token", OpenstackToken)
    req.Header.Set("X-Region-Name", regionName)

    // set client timeout
    client := &http.Client{Timeout: time.Second * time.Duration(5)}
    // send request
    resp, err := client.Do(req)
    if err != nil {
        fmt.Println("Error reading response.", err)
        return
    }
    defer resp.Body.Close()
    body, _ := ioutil.ReadAll(resp.Body)
    // print the response body
    fmt.Println(string(body))
}
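The response body is printed raw above. A minimal sketch of pulling the new stack's ID out of it; the response shape (a top-level `stack` object with an `id` field) is assumed from the Heat v1 stacks API:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// stackCreateResponse mirrors the body Heat is assumed to return
// on stack creation (Heat v1 API).
type stackCreateResponse struct {
	Stack struct {
		ID string `json:"id"`
	} `json:"stack"`
}

// parseStackID extracts the new stack's UUID from the raw response body.
func parseStackID(body []byte) (string, error) {
	var r stackCreateResponse
	if err := json.Unmarshal(body, &r); err != nil {
		return "", err
	}
	return r.Stack.ID, nil
}

func main() {
	// sample body, not a real API response
	sample := []byte(`{"stack": {"id": "3095aefc-09fb-4bc7-b1f0-f21a304e864c"}}`)
	id, err := parseStackID(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println("created stack:", id)
}
```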

openstack-testing Heat orchestration templates

Orchestration tests on OpenStack Stein Heat; all tests passed.

Heat orchestration

Heat currently supports two template formats: the JSON-based CFN template and the YAML-based HOT template. CFN templates exist mainly for AWS compatibility. HOT is Heat's native format, with richer resource types that better showcase Heat's features.

A typical HOT template consists of the following elements:

  • Template version: required; specifies the template version, which Heat validates against.
  • Parameters: optional; the list of input parameters.
  • Resources: required; the resources the generated stack contains. Dependencies between resources can be declared, e.g. create a port first, then use the port to create a VM.
  • Outputs: optional; information the generated stack exposes, for users or as input to other stacks.
Requirement 1 - single VM with a Cinder volume

1. Create a VM with a custom image, flavor, network, availability zone, and security group;
2. Create a volume with a custom volume type, availability zone, and size;
3. Attach the volume to the VM;
4. All resources must be passed in as parameters;
5. The outputs must include the VM's UUID, IP address, and VNC address;

This requirement is simple: a quick look at the resource types Heat supports shows that three types are enough:

  • type: OS::Nova::Server
  • type: OS::Cinder::Volume
  • type: OS::Cinder::VolumeAttachment

The template:

CREATE_VM_VOLUME_AND_ATTACH.yaml
heat_template_version: 2013-05-23

description: Template for Create VM

parameters:
  image_id:
    type: string
    description: Image ID or image name to use for the server
    constraints:
      - custom_constraint: glance.image

  secgroup_id:
    type: string
    description: Id of the security group

  network_id:
    type: string
    description: network id

  flavor_id:
    type: string
    description: Flavor for the server to be created
    constraints:
      - custom_constraint: nova.flavor

  server_az:
    type: string
    description: Availability zone of the VM
    default: nova

  volume_size:
    type: number
    description: Size of volume to attach to VM
    default: 1
    constraints:
      - range: { min: 1, max: 200 }

  volume_type:
    type: string
    description: If specified, the type of volume to use, mapping to a specific backend.
    default: sata

  volume_az:
    type: string
    default: nova

  server_name:
    type: string
    description: VM Name
    default: heat-stack-default

resources:
  server:
    type: OS::Nova::Server
    properties:
      name: { get_param: server_name }
      availability_zone: { get_param: server_az }
      image: { get_param: image_id }
      flavor: { get_param: flavor_id }
      networks:
        - network: { get_param: network_id }
      security_groups:
        - { get_param: secgroup_id }

  volume:
    type: OS::Cinder::Volume
    properties:
      size: { get_param: volume_size }
      volume_type: { get_param: volume_type }
      availability_zone: { get_param: volume_az }
      description: Volume for stack

  volume_attachment:
    type: OS::Cinder::VolumeAttachment
    properties:
      volume_id: { get_resource: volume }
      instance_uuid: { get_resource: server }

outputs:
  server_id:
    value: { get_resource: server }
  server_ip:
    description: Network IP address of server
    value: { get_attr: [ server, first_address ] }
  novnc_console_url:
    value: { get_attr: [ server, console_urls, novnc ] }
    description: novnc console URLs for the server
#openstack stack create -t CREATE_VM_VOLUME_AND_ATTACH.yaml \
--parameter volume_type=sata \
--parameter volume_az=nova \
--parameter volume_size=50 \
--parameter image_id=eef3ab2f-3c8f-4530-8806-15aa7603ba2f \
--parameter secgroup_id=de99ad74-aa45-44fc-bee3-0f29c7005abb \
--parameter network_id=92ef8068-46ba-4db7-b5d0-dcef1089efb2 \
--parameter flavor_id=ecs.small \
--parameter server_az=nova \
--parameter server_name=t-server-demo \
heat-test
Requirement 2 - multiple VMs with Cinder volumes

1. Building on requirement 1, create resources in batches with a custom count
2. Reuse the template from requirement 1
3. Return the VM UUIDs, IPs, and VNC URLs of the whole batch in one stack

Here we need a new resource type, OS::Heat::ResourceGroup. As the name suggests, it is a resource group: a group containing one or more identical nested resources.

Multi_Num_VM_VOLUME_ATTACH.yaml
heat_template_version: 2013-05-23

description: Template for Create VM Volume, the Volume will be auto Attached by heat.

parameters:
  num_resources:
    type: number
    description: Number of resources
    default: 1
    constraints:
      - range: { min: 1, max: 10 }

  image_id:
    type: string
    description: ID of the image to use for the instance to be created.
    constraints:
      - custom_constraint: glance.image

  secgroup_id:
    type: string
    description: Id of the security group

  network_id:
    type: string
    description: network id

  flavor_id:
    type: string
    description: Flavor for the server to be created
    constraints:
      - custom_constraint: nova.flavor

  server_az:
    type: string
    description: Availability Zone of the VM
    default: nova

  volume_size:
    type: number
    description: Size of volume to attach to VM
    default: 1
    constraints:
      - range: { min: 1, max: 200 }

  volume_type:
    type: string
    description: If specified, the type of volume to use, mapping to a specific backend.

  volume_az:
    type: string
    description: Availability Zone of the Volumes

resources:
  resgroup:
    type: OS::Heat::ResourceGroup
    properties:
      count: { get_param: num_resources }
      resource_def:
        type: CREATE_VM_VOLUME_AND_ATTACH.yaml # reference the template from requirement 1
        properties:
          server_name: heat-vm-%index%
          image_id: { get_param: image_id }
          secgroup_id: { get_param: secgroup_id }
          network_id: { get_param: network_id }
          flavor_id: { get_param: flavor_id }
          server_az: { get_param: server_az }
          volume_size: { get_param: volume_size }
          volume_type: { get_param: volume_type }
          volume_az: { get_param: volume_az }

outputs:
  myrefs:
    value: { get_attr: [resgroup, refs] }
  server_ids:
    value: { get_attr: [resgroup, server_id] }
  server_ips:
    value: { get_attr: [resgroup, server_ip] }
  server_novnc_urls:
    value: { get_attr: [resgroup, novnc_console_url] }
#openstack stack create -t Multi_Num_VM_VOLUME_ATTACH.yaml \
--parameter volume_type=sata \
--parameter volume_az=nova \
--parameter volume_size=50 \
--parameter image_id=eef3ab2f-3c8f-4530-8806-15aa7603ba2f \
--parameter secgroup_id=de99ad74-aa45-44fc-bee3-0f29c7005abb \
--parameter network_id=92ef8068-46ba-4db7-b5d0-dcef1089efb2 \
--parameter flavor_id=ecs.small \
--parameter server_az=nova \
--parameter num_resources=3 \
heat-test2

Requirement 3 - single VM with local disk

CREATE_VM_LOCAL_STORAGE.yaml
heat_template_version: 2013-05-23

description: Template for Create VM

parameters:
  image_id:
    type: string
    description: Image ID or image name to use for the server
    constraints:
      - custom_constraint: glance.image

  secgroup_id:
    type: string
    description: Id of the security group

  network_id:
    type: string
    description: network id

  flavor_id:
    type: string
    description: Flavor for the server to be created
    constraints:
      - custom_constraint: nova.flavor

  server_az:
    type: string
    description: Availability zone of the VM
    default: nova

  server_name:
    type: string
    description: VM Name
    default: heat-stack-default

resources:
  server:
    type: OS::Nova::Server
    properties:
      name: { get_param: server_name }
      availability_zone: { get_param: server_az }
      image: { get_param: image_id }
      flavor: { get_param: flavor_id }
      networks:
        - network: { get_param: network_id }
      security_groups:
        - { get_param: secgroup_id }

outputs:
  server_id:
    value: { get_resource: server }
  server_ip:
    description: Network IP address of server
    value: { get_attr: [ server, first_address ] }
  novnc_console_url:
    value: { get_attr: [ server, console_urls, novnc ] }
    description: novnc console URLs for the server
#openstack stack create -t CREATE_VM_LOCAL_STORAGE.yaml \
--parameter image_id=eef3ab2f-3c8f-4530-8806-15aa7603ba2f \
--parameter secgroup_id=de99ad74-aa45-44fc-bee3-0f29c7005abb \
--parameter network_id=92ef8068-46ba-4db7-b5d0-dcef1089efb2 \
--parameter flavor_id=ecs.small_L \
--parameter server_az=nova \
--parameter server_name=t-server-demo-local \
heat-local-test

Requirement 4 - multiple VMs with local disk

Multi_Num_VM_LOCAL_STORAG.yaml
heat_template_version: 2013-05-23

description: Template for batch-creating local-storage VMs via ResourceGroup.

parameters:
  num_resources:
    type: number
    description: Number of resources
    default: 1
    constraints:
      - range: { min: 1, max: 10 }

  image_id:
    type: string
    description: ID of the image to use for the instance to be created.
    constraints:
      - custom_constraint: glance.image

  secgroup_id:
    type: string
    description: Id of the security group

  network_id:
    type: string
    description: network id

  flavor_id:
    type: string
    description: Flavor for the server to be created
    constraints:
      - custom_constraint: nova.flavor

  server_az:
    type: string
    description: Availability Zone of the VM
    default: nova

resources:
  resgroup:
    type: OS::Heat::ResourceGroup
    properties:
      count: { get_param: num_resources }
      resource_def:
        type: CREATE_VM_LOCAL_STORAGE.yaml # reference the template from requirement 3
        properties:
          server_name: heat-vm-local-%index%
          image_id: { get_param: image_id }
          secgroup_id: { get_param: secgroup_id }
          network_id: { get_param: network_id }
          flavor_id: { get_param: flavor_id }
          server_az: { get_param: server_az }

outputs:
  myrefs:
    value: { get_attr: [resgroup, refs] }
  server_ids:
    value: { get_attr: [resgroup, server_id] }
  server_ips:
    value: { get_attr: [resgroup, server_ip] }
  server_novnc_urls:
    value: { get_attr: [resgroup, novnc_console_url] }
#openstack stack create -t Multi_Num_VM_LOCAL_STORAG.yaml  \
--parameter image_id=eef3ab2f-3c8f-4530-8806-15aa7603ba2f \
--parameter secgroup_id=de99ad74-aa45-44fc-bee3-0f29c7005abb \
--parameter network_id=92ef8068-46ba-4db7-b5d0-dcef1089efb2 \
--parameter flavor_id=ecs.small_L \
--parameter server_az=nova \
--parameter num_resources=3 \
heat-local-test2
The corresponding JSON POST body, as sent to the Heat API:
{
"stack_name": "heat-local-test2",
"disable_rollback": true,
"parameters": {
"image_id": "eef3ab2f-3c8f-4530-8806-15aa7603ba2f",
"secgroup_id": "de99ad74-aa45-44fc-bee3-0f29c7005abb",
"network_id": "92ef8068-46ba-4db7-b5d0-dcef1089efb2",
"flavor_id": "ecs.small_L",
"server_az": "nova",
"num_resources": "3"
},
"template": {
"heat_template_version": "2013-05-23",
"description": "Template for Create VM Volume,the Volume will be auto Attached by heat.",
"parameters": {
"num_resources": {
"type": "number",
"description": "Numbers of Resrouce",
"default": 1,
"constraints": [{
"range": {
"min": 1,
"max": 10
}
}]
},
"image_id": {
"type": "string",
"description": "ID of the image to use for the instance to be created.",
"constraints": [{
"custom_constraint": "glance.image"
}]
},
"secgroup_id": {
"type": "string",
"description": "Id of the security groupe"
},
"network_id": {
"type": "string",
"description": "network id"
},
"flavor_id": {
"type": "string",
"description": "Flavor for the server to be created",
"constraints": [{
"custom_constraint": "nova.flavor"
}]
},
"server_az": {
"type": "string",
"description": "Availablity Zone of the VM",
"default": "nova"
}
},
"resources": {
"resgroup": {
"type": "OS::Heat::ResourceGroup",
"properties": {
"count": {
"get_param": "num_resources"
},
"resource_def": {
"type": "file:///opt/kolla-ansible-deploy/openstack/qcloud/dev/bjyt_region_1/CREATE_VM_LOCAL_STORAGE.yaml",
"properties": {
"image_id": {
"get_param": "image_id"
},
"secgroup_id": {
"get_param": "secgroup_id"
},
"network_id": {
"get_param": "network_id"
},
"flavor_id": {
"get_param": "flavor_id"
},
"server_az": {
"get_param": "server_az"
}
}
}
}
}
},
"outputs": {
"myrefs": {
"value": {
"get_attr": ["resgroup", "refs"]
}
},
"server_ids": {
"value": {
"get_attr": ["resgroup", "server_id"]
}
},
"server_ips": {
"value": {
"get_attr": ["resgroup", "server_ip"]
}
},
"server_novnc_urls": {
"value": {
"get_attr": ["resgroup", "novnc_console_url"]
}
}
}
},
"files": {
"file:///opt/kolla-ansible-deploy/openstack/dev/bjyt_region_1/CREATE_VM_LOCAL_STORAGE.yaml": "{\"heat_template_version\": \"2013-05-23\", \"description\": \"Template for Create VM\", \"parameters\": {\"image_id\": {\"type\": \"string\", \"description\": \"Image ID or image name to use for the server\", \"constraints\": [{\"custom_constraint\": \"glance.image\"}]}, \"secgroup_id\": {\"type\": \"string\", \"description\": \"Id of the security groupe\"}, \"network_id\": {\"type\": \"string\", \"description\": \"network id\"}, \"flavor_id\": {\"type\": \"string\", \"description\": \"Flavor for the server to be created\", \"constraints\": [{\"custom_constraint\": \"nova.flavor\"}]}, \"server_az\": {\"type\": \"string\", \"description\": \"Availablity zone of the VM\", \"default\": \"nova\"}}, \"resources\": {\"server\": {\"type\": \"OS::Nova::Server\", \"properties\": {\"availability_zone\": {\"get_param\": \"server_az\"}, \"image\": {\"get_param\": \"image_id\"}, \"flavor\": {\"get_param\": \"flavor_id\"}, \"networks\": [{\"network\": {\"get_param\": \"network_id\"}}], \"security_groups\": [{\"get_param\": \"secgroup_id\"}]}}}, \"outputs\": {\"server_id\": {\"value\": {\"get_resource\": \"server\"}}, \"server_ip\": {\"description\": \"Network IP address of server\", \"value\": {\"get_attr\": [\"server\", \"first_address\"]}}, \"novnc_console_url\": {\"value\": {\"get_attr\": [\"server\", \"console_urls\", \"novnc\"]}, \"description\": \"novnc console URLs for the server\"}}}"
},
"environment": {}
}
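Note how the nested template travels verbatim inside the `files` map, keyed by the same URI used in `resource_def.type`. A small stdlib-only Go sketch of building such a map (the key and template body here are illustrative, not the paths from the capture above):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// buildFiles encodes a nested Heat template into the "files" map:
// the template body becomes an escaped JSON string, keyed by the
// URI that resource_def.type refers to.
func buildFiles(key, templateBody string) (string, error) {
	files := map[string]string{key: templateBody}
	b, err := json.Marshal(files)
	return string(b), err
}

func main() {
	// hypothetical key and minimal template body
	out, err := buildFiles(
		"file:///tmp/CREATE_VM_LOCAL_STORAGE.yaml",
		`{"heat_template_version": "2013-05-23"}`,
	)
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```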

More

After these examples, most readers should have a reasonable grasp of Heat orchestration. Orchestration itself is not hard; the hard part is composing the resource logic. The community maintains open-source Heat templates covering most of the advanced usage. If you are interested:

$ git clone git://git.openstack.org/openstack/heat-templates

参考:https://www.jianshu.com/p/1f2482792683

kata-integrating Kata Containers with native Docker

Using Kata Containers as a Docker runtime.

1. What is Kata Containers?

  Kata Containers is a new virtual machine implementation that plugs seamlessly into the existing container ecosystem and integrates with Kubernetes, today's most popular container orchestrator. It offers the fast startup of containers with the security isolation of VMs: unlike plain Docker, containers do not share a kernel, so isolation is stronger.

  The Kata Containers project's main goal is to combine the security isolation of virtualization with the fast startup of containers.

2. Docker already exists, so why Kata Containers?

  Linux containers are lightweight, fast, and easy to integrate into many application workflows. But running containers raises security concerns, especially with multi-tenant containers on a single operating system: ultimately, containers share one kernel, one I/O path, the network, memory, and so on.

  With lightweight Docker containers, the biggest issue is security: containers can attack each other, and if the shared kernel is compromised, every container goes down with it. Virtualization such as KVM solves the security problem completely, but at the cost of speed.

  Kata aims to mitigate this via a hypervisor: it creates a VM that looks and feels like a container.
  The Kata Containers project merges Intel Clear Containers and Hyper runV technology, supports multiple hardware platforms, and is compatible with the Open Container Initiative (OCI) and Kubernetes Container Runtime Interface (CRI) specifications. The project is managed by the OpenStack Foundation, with code hosted on GitHub (https://github.com/kata-containers).

3. Replacing Docker's runtime with Kata Containers?

  Architecturally, kata-container sits at the same level as the original runc. Docker is only a framework for managing the container lifecycle; the component that actually starts containers was originally LXC, then runc, and now it can be Kata. In other words, kata-container acts as a Docker plugin, and Kata containers can be started with ordinary docker commands. Kata's biggest selling point is solving the security and isolation problem of kernel-sharing containers by running each container in a lightweight VM with its own kernel.


4. Getting started

  1. Install the Kata Containers package (CentOS 7 x86_64 as the example)

Here we deploy with snap:

#yum install epel-release
#yum install snapd
#systemctl enable --now snapd.socket
#ln -s /var/lib/snapd/snap /snap
#snap install kata-containers --classic
If you hit the error: too early for operation, device not yet seeded or device model not acknowledged,
run #yum -y install snapd once more, then retry:
#snap install kata-containers --classic

Configure Kata Containers

By default Kata Containers is mounted as a read-only filesystem under /snap/kata-containers, so the default config file cannot be modified. However, kata-runtime supports loading its configuration from a custom path instead of the default; set one up with:

#mkdir -p /etc/kata-containers
#cp /snap/kata-containers/current/usr/share/defaults/kata-containers/configuration.toml /etc/kata-containers/
#cat /etc/kata-containers/configuration.toml

Kata Containers can now be configured by editing /etc/kata-containers/configuration.toml; for now we leave it unchanged.

Change the Docker runtime

Before changing the runtime, check Docker's default with docker info | grep -i runtime. As shown below, runc is the only available runtime and also the default:

$ docker info | grep -i runtime
Runtimes: runc
Default Runtime: runc

To swap runc for Kata's kata-runtime, use either of the following methods:

Note: the official docs give the path /usr/bin/kata-runtime, which applies when Kata is installed via kata-manager. With a snap install, the path is /snap/kata-containers/current/usr/bin/kata-runtime.

Method 1: via a systemd drop-in:

#mkdir -p /etc/systemd/system/docker.service.d/
#cat <<EOF | sudo tee /etc/systemd/system/docker.service.d/kata-containers.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -D --add-runtime kata-runtime=/snap/kata-containers/current/usr/bin/kata-runtime --default-runtime=kata-runtime
EOF

Method 2: via Docker's daemon.json; add the following to /etc/docker/daemon.json:

{
  "default-runtime": "kata-runtime",
  "runtimes": {
    "kata-runtime": {
      "path": "/snap/kata-containers/current/usr/bin/kata-runtime"
    }
  }
}

Method 2 looks a bit simpler; I used method 1, but either works. After configuring, restart Docker for the change to take effect:

#systemctl daemon-reload
#systemctl restart docker

Check the runtime configuration again: kata-runtime is now available and is the default:

$ docker info | grep -i runtime
Runtimes: kata-runtime runc
Default Runtime: kata-runtime

Run Docker with kata-runtime

Pull a test image with docker pull busybox, then run:

#docker run busybox uname -a

The default runtime is now kata-runtime. To use the previous runc, specify it with the --runtime=runc flag:

#docker run --runtime=runc busybox uname -a

Uninstall Kata Containers

Run:

#snap remove kata-containers

Then delete the related config files:

#rm -r /etc/kata-containers

#rm /etc/docker/daemon.json

参考:https://snapcraft.io/install/kata-containers/centos

kata-rapid deployment of Kata Containers

Quickly deploy Kata Containers (via kata-deploy on Kubernetes):

$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/packaging/master/kata-deploy/kata-rbac.yaml
$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/packaging/master/kata-deploy/kata-deploy.yaml

Clean up kata

$ kubectl delete -f https://raw.githubusercontent.com/kata-containers/packaging/master/kata-deploy/kata-deploy.yaml
$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/packaging/master/kata-deploy/kata-cleanup.yaml

$ kubectl delete -f https://raw.githubusercontent.com/kata-containers/packaging/master/kata-deploy/kata-cleanup.yaml
$ kubectl delete -f https://raw.githubusercontent.com/kata-containers/packaging/master/kata-deploy/kata-rbac.yaml

go-using JSON fields with gorm

MySQL natively supports the JSON type since 5.7.8. JSON storage used to be NoSQL territory, but now MySQL supports it too, including operations on JSON data.

NoSQL databases are naturally JSON-friendly; they store loosely structured (weakly typed) data. In MySQL, columns are defined up front (strongly structured), so later schema changes are comparatively awkward.

In Go we usually operate on the database with gorm or raw SQL. In mid-to-large projects, many senior engineers and DBAs recommend raw SQL: it avoids unexpected errors in the middle layer and makes troubleshooting easier; in particular, an ORM adds complexity when chasing performance problems.

That said, for development convenience and speed I usually pick gorm.

Since gorm does not natively support JSON columns, a couple of methods must be added before JSON fields can be used.

JSON field support

The field's type needs two methods: Value and Scan.

type Demo struct {
    ID  string
    Obj DemoObj `sql:"TYPE:json"`
}

type DemoObj struct {
    C1 string
    C2 int
    C3 bool
}

func (c DemoObj) Value() (driver.Value, error) {
    b, err := json.Marshal(c)
    return string(b), err
}

func (c *DemoObj) Scan(input interface{}) error {
    return json.Unmarshal(input.([]byte), c)
}

Slice types can be supported the same way:

/**************** make gorm support []string ****************/
type Strings []string

func (c Strings) Value() (driver.Value, error) {
    b, err := json.Marshal(c)
    return string(b), err
}

func (c *Strings) Scan(input interface{}) error {
    return json.Unmarshal(input.([]byte), c)
}

/**************** make gorm support []string ****************/

/**************** make gorm support []int64 ****************/
type Int64s []int64

func (c Int64s) Value() (driver.Value, error) {
    b, err := json.Marshal(c)
    return string(b), err
}

func (c *Int64s) Scan(input interface{}) error {
    return json.Unmarshal(input.([]byte), c)
}

/**************** make gorm support []int64 ****************/

Note

Scan must be declared on a pointer receiver, otherwise select will not populate the field.

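The Value/Scan pair can be exercised without a database: the sketch below (stdlib only; the field names mirror the DemoObj example above) round-trips a struct the way a SQL driver would:

```go
package main

import (
	"database/sql/driver"
	"encoding/json"
	"fmt"
)

type DemoObj struct {
	C1 string `json:"c1"`
	C2 int    `json:"c2"`
	C3 bool   `json:"c3"`
}

// Value serializes the struct to a JSON string for storage.
func (c DemoObj) Value() (driver.Value, error) {
	b, err := json.Marshal(c)
	return string(b), err
}

// Scan uses a pointer receiver so the decoded data is
// written back into the caller's value.
func (c *DemoObj) Scan(input interface{}) error {
	return json.Unmarshal(input.([]byte), c)
}

func main() {
	v, err := DemoObj{C1: "hello", C2: 42, C3: true}.Value()
	if err != nil {
		panic(err)
	}
	var out DemoObj
	// drivers typically hand Scan a []byte
	if err := out.Scan([]byte(v.(string))); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", out)
}
```
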
openstack-troubleshooting broken networking behind a VM gateway

Background: inside a VPC, a user has two VMs, VM1 (.138) and VM2 (.181). The user wants to steer .138's traffic through .181, so .138's gateway was changed to .181. When .138 pings 114.114.114.114, there is no connectivity.

Troubleshooting:

  1. tcpdump on the gateway VM's NIC

The gateway (.181) sends the reply, but .138 never receives it.

(packet capture: /images/openstack/image-20210220172329798.png)

  2. Capturing on the gateway VM's host shows only one reply packet (it made it past the tap device)

    #tcpdump -i any -nn icmp and host 114.114.114.114

    So we suspect the security group (the packet is dropped right after the tap device).

    Trace the iptables rules for this NIC:

    #iptables -nvL neutron-openvswi-sb8096cab-2
    Chain neutron-openvswi-sb8096cab-2 (1 references)
    pkts bytes target prot opt in out source destination
    3818 651K RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
    3232K 229M RETURN all -- * * 192.168.123.181 0.0.0.0/0 MAC FA:16:3E:29:77:83 /* Allow traffic from defined IP/MAC pairs. */
    5558 381K DROP all -- * * 0.0.0.0/0 0.0.0.0/0 /* Drop traffic without an IP/MAC allow rule. */

    The DROP counters keep growing, so the problem is located. But what causes it?

By default Neutron prevents ARP/IP spoofing: only packets whose source IP and MAC belong to the port are allowed through. Because the user turned a VM into a gateway, forwarded packets carry other source IPs and are DROPped.

Workarounds:

1. Allow the packets (does not survive permanently)
#iptables -t filter -I neutron-openvswi-sb8096cab-2 -s 0.0.0.0/0 -j RETURN

2. Disable the port's security group / port security
#neutron port-update b8096cab-2536-4d0d-851a-1056b81f11ca --no-security-groups
#neutron port-update b8096cab-2536-4d0d-851a-1056b81f11ca --port_security_enabled=False

go-100 soul-searching questions

  1. How do you iterate over a map in order?

    A map cannot be iterated in order because it is unordered. To read it in order, first make the keys ordered: collect the keys into a slice, sort the slice, then iterate the slice and look up each key.
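The approach above can be sketched as:

```go
package main

import (
	"fmt"
	"sort"
)

// sortedKeys collects the map's keys into a slice and sorts it,
// giving a deterministic iteration order.
func sortedKeys(m map[string]int) []string {
	keys := make([]string, 0, len(m))
	for k := range m {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	return keys
}

func main() {
	m := map[string]int{"banana": 2, "apple": 1, "cherry": 3}
	// iterate the sorted key slice instead of the map itself
	for _, k := range sortedKeys(m) {
		fmt.Println(k, m[k])
	}
}
```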