Harmonia: open iris_basic_v0.1
Contents 
1. Introduce 
2. Architecture 
3. OpenStack 
4. IRIS Neutron Plugin 
5. IRIS ML2 Mechanism Driver 
6. IRIS Virtual Network Module 
7. Todo
Introduce 
Harmonia 
• Codename: Harmonia (하르모니아) 
 Development codename 
 Official name: IRIS-pNaaS 
• Harmonia Logo 
• Harmonia? 
 The Greek goddess born to Ares (god of war) and Aphrodite (goddess of beauty and love); her name means "harmony" 
 The origin of the musical term "harmony" 
 Chosen to express the goal of bringing the virtual network into harmony
Introduce 
Keywords: SIA (Swift, Inexpensive, Automation) 
• Swift 
• Inexpensive 
• Automation 
Harmonia lets you create virtual networks that are swift to provision and inexpensive to run. And don't worry: it's automated.
Conceptual Diagram 
[Diagram: VMs of Tenant A (VM1-VM3) and VMs of Tenant B (VM4-VM6) are spread across several Compute Nodes, alongside a Network Node and a Control Node. The Control Node exposes a REST API northbound and programs the data plane southbound via OpenFlow.]
OpenStack Concept Architecture 
http://docs.openstack.org/icehouse/install-guide/install/apt/content/ch_overview.html
Control Node, Network Node, Compute Node 
http://docs.openstack.org/icehouse/install-guide/install/apt/content/ch_overview.html 
SDN Controller
Deployment Diagram (Server Rack) 
[Diagram: Control Node, Network Node, and Compute Node in one rack. Each node's eth0 attaches to the Management Network; the Data Network runs through an OF Switch programmed via OpenFlow; eth1/eth2 provide the Data and External Network uplinks as detailed on the next slide.]
Deployment Diagram (Server Rack, Photo) 
Control Node 
- OS: Ubuntu 14.04, Fedora, etc. 
- OVS: version 2.0 or later 
 * Caution: dependent on the Linux kernel version 
- Network interfaces: 2 
 * eth0: Management Network (private IP) 
 * eth1: external network uplink (public IP) 
Network Node 
- OS: Ubuntu 14.04, Fedora, etc. 
- OVS: version 2.0 or later 
 * Caution: dependent on the Linux kernel version 
- Network interfaces: 3 or more 
 * eth0: Management Network (private IP) 
 * eth1: Data Network (private IP) 
 * eth2: external network uplink (public IP) 
Compute Node 
- OS: Ubuntu 14.04, Fedora, etc. 
- OVS: version 2.0 or later 
 * Caution: dependent on the Linux kernel version 
- Network interfaces: 3 or more 
 * eth0: Management Network (private IP) 
 * eth1: Data Network (private IP) 
 * eth2: external network uplink (public IP) 
[Rack layout: one Control Node, one Network Node, and seven Compute Nodes, each wired to the Management Network (plain hub) ①, the Data Network (OpenFlow switch, with the SDN Controller attached) ②, and the external network ③.] 
Data Network Switch (OpenFlow SW) 
- OF Switch, OVS, OpenWRT, … 
- Connects to the SDN Controller
Overview & Features 
Overview 
• A virtual switch or Virtual Ethernet Bridge (VEB) 
• A key component of networking for virtualized computing 
• User-space: configuration, control 
• Kernel-space: datapath (included in the mainline Linux kernel since version 3.3) 
• Comparable products: Cisco Nexus 1000v, VMware vDS, IBM DVS 5000v, MS Hyper-V vSwitch 
Features 
• Visibility into inter-VM communication via NetFlow, sFlow®, IPFIX, SPAN, LACP (IEEE 802.1AX-2008) 
• Standard 802.1Q VLAN model with trunking 
• STP (IEEE 802.1D-1998), fine-grained QoS control 
• NIC bonding with source-MAC load balancing, active backup, and L4 hashing 
• OpenFlow protocol support (including many extensions for virtualization) 
• Multiple tunneling protocols (VXLAN, Ethernet over GRE, CAPWAP, IPsec, GRE over IPsec)
Open vSwitch Architecture 
[Diagram: in user space, ovs-vswitchd (controlled by ovs-appctl, its kernel datapath inspected by ovs-dpctl) and ovsdb-server (queried by ovs-vsctl and ovsdb-client, and by a remote Open vSwitch DB manager over JSON-RPC); ovs-brcompatd provides the legacy bridge interface. In kernel space, openvswitch.ko and brcompat.ko implement the datapath (fast path), reached via Netlink. A VM's vNIC attaches through a tap device. An OpenFlow controller, or ovs-ofctl by hand, speaks OpenFlow to ovs-vswitchd.]
Open vSwitch Architecture 
[Diagram: a bridge br-ovs holding a flow table. Its ports include tap1/tap2 (via vnet0/vnet1 to the VMs' vNICs), the physical eth0, and bond0 bonding eth1 and eth2. Packet flows traverse Bridge, Port, then Interface.]
Open vSwitch Architecture 
ovs-vswitchd 
• a daemon that implements the switch, along with a companion Linux kernel module for flow-based switching 
ovsdb-server 
• a lightweight database server that ovs-vswitchd queries to obtain configuration 
ovs-vsctl 
• a utility for querying and updating the configuration of ovs-vswitchd 
ovs-dpctl 
• a tool for configuring and monitoring the switch kernel module 
ovs-appctl 
• a utility that sends commands to running Open vSwitch daemons (ovs-vswitchd) 
ovs-controller 
• a simple OpenFlow controller reference implementation 
brcompat.ko 
• Linux bridge compatibility module 
openvswitch.ko 
• Open vSwitch switching datapath
Open vSwitch Configuration (OVSDB tables) 
Table          Purpose 
Open_vSwitch   Open vSwitch configuration 
Bridge         Bridge configuration 
Port           Port configuration 
Interface      One physical network device in a Port 
QoS            Quality of Service configuration 
Queue          QoS output queue 
Mirror         Port mirroring 
Controller     OpenFlow controller configuration 
Manager        OVSDB management connection 
NetFlow        NetFlow configuration 
SSL            SSL configuration 
sFlow          sFlow configuration 
Capability     Capability configuration
Open vSwitch Configuration sample 
$ sudo ovs-vsctl show 
225d73cc-15b3-4db5-9b45-e783f7c49a10 
    Bridge br-tun 
        Port "gre-3" 
            Interface "gre-3" 
                type: gre 
                options: {in_key=flow, out_key=flow, remote_ip="192.168.0.10"} 
        Port br-tun 
            Interface br-tun 
                type: internal 
        Port patch-int 
            Interface patch-int 
                type: patch 
                options: {peer=patch-tun} 
    Bridge br-int 
        Port "tap1" 
            tag: 1 
            Interface "tap1" 
        Port "tap2" 
            tag: 1 
            Interface "tap2" 
        Port br-int 
            Interface br-int 
                type: internal 
        Port patch-tun 
            Interface patch-tun 
                type: patch 
                options: {peer=patch-int}
Open vSwitch Configuration sample 
[Diagram: the Linux networking stack of the node above. VMs attach via vNICs to tap1/tap2 on br-int; br-int connects to br-tun through the patch-int/patch-tun pair; br-tun terminates the GRE tunnel (gre3) to peer 192.168.0.10 over eth1 (192.168.0.20). eth0 carries the external IP and eth2 is 192.168.10.20.]
Open vSwitch Demo 
[Diagram: two hosts, each with an Open vSwitch bridge carrying two VMs on tap1/tap2. Each host's eth0 faces the external network through a switch; eth1 faces the tunneling network 192.168.0.0/24, where a GRE tunnel (gre-1) connects the two bridges.]
Virtual Network - OpenStack 
[Diagram: Control Node (Nova, Keystone, Glance, Horizon), Network Node (Neutron Server, Neutron L3-Agent), and two Compute Nodes (Nova Compute, Neutron Agent with the Neutron OpenvSwitch plug-in). eth0 of every node sits on the management network 192.168.0.0/24, eth1 on the data network 192.168.10.0/24, and eth2 on the external network.]
Virtual Network - OpenStack 
[Diagram: on the Network Node, br-ex (qg~~ on eth0) and br-int (qr~~, tap~~) connect through br-tun, which runs GRE tunnels over eth1: gre-1 to Compute Node 01 and gre-2 to Compute Node 02. On Compute Node 01, VMs attach via tap1/tap2 to br-int, which reaches br-tun and the gre-1 tunnel.] 
• qg~~ : external gateway interface 
• qr~~ : virtual router interface 
• tap~~ : network service interface (DHCP, DNS, …)
Layer Diagram 
[Diagram: Neutron's layered design. Top: the core REST API plus per-extension REST APIs (Extension A … Extension N). Middle: AuthN / AuthZ / input validation / output view. Bottom: the core plugin interface and service plugin interfaces (Service A … Service N), implemented by a vendor-specific core plugin and service plugins that drive the agents.]
Application and filters 
[composite:neutron] 
use = egg:Paste#urlmap 
/: neutronversions 
/v2.0: neutronapi_v2_0 
[composite:neutronapi_v2_0] 
use = call:neutron.auth:pipeline_factory 
keystone = authtoken keystonecontext extensions neutronapiapp_v2_0 
[filter:keystonecontext] 
paste.filter_factory = neutron.auth:NeutronKeystoneContext.factory 
[filter:authtoken] 
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory 
[filter:extensions] 
paste.filter_factory = neutron.api.extensions:plugin_aware_extension_middleware_factory 
[app:neutronversions] 
paste.app_factory = neutron.api.versions:Versions.factory 
[app:neutronapiapp_v2_0] 
paste.app_factory = neutron.api.v2.router:APIRouter.factory
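The `pipeline_factory` referenced above composes the `keystone` pipeline string into nested WSGI filters around the app. As a rough sketch (names and logic simplified; not the actual Neutron code), it loads the innermost app and wraps it with each listed filter from right to left:

```python
# Hedged sketch of a Paste-style pipeline_factory, mirroring the shape of
# neutron.auth:pipeline_factory. "loader" stands in for the Paste loader.

def pipeline_factory(loader, global_conf, **local_conf):
    """Pick the pipeline named by auth_strategy and wrap the app in filters."""
    # e.g. local_conf = {"keystone": "authtoken keystonecontext extensions neutronapiapp_v2_0"}
    pipeline = local_conf[global_conf.get("auth_strategy", "keystone")].split()
    app = loader.get_app(pipeline[-1])       # innermost WSGI app
    for name in reversed(pipeline[:-1]):     # wrap outward: extensions, ..., authtoken
        app = loader.get_filter(name)(app)
    return app
```

The request therefore enters `authtoken` first and only reaches `neutronapiapp_v2_0` after every filter has passed it along.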
neutron/server/__init__.py: main() 
config.parse(sys.argv[1:]) # --config-file neutron.conf --config-file XXXXX.ini 
neutron/common/config.py 
def load_paste_app(app_name) # name of the application to load, e.g. load_paste_app("neutron") 
• neutron/auth.py 
def pipeline_factory(loader, global_conf, **local_conf): 
• neutron/api/v2/router.py 
class APIRouter(wsgi.Router): 
def factory(cls, global_config, **local_config): 
• neutron/api/extensions.py 
def plugin_aware_extension_middleware_factory(global_config, **local_config): 
neutron/auth.py 
class NeutronKeystoneContext(wsgi.Middleware):
pipeline 
[Flow: a URL request passes through authtoken, then keystonecontext, then extensions. If the URL is declared in extensions, it is processed and the response returned; otherwise it falls through to neutronapiapp_v2_0. If the URL is declared there, it is processed; if not, HTTPNotFound is returned.]
neutron/api/v2/router.py : APIRouter.factory() 
1. __init__() 
1.1 plugin = manager.NeutronManager.get_plugin( ) 
1.1.1 neutron/manager.py : __init__( ) 
1.1.1.1 def _create_instance( ) #create core plugin instance 
1.1.1.2 def _load_service_plugins( ) #load plugin service 
load plugins 
neutron/neutron.conf 
service_plugins = … 
core_plugin = ml2 
NeutronManager : service_plugins = {"CORE": ml2, "LOADBALANCER": xxx, …}
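A minimal sketch of what `_create_instance()` and `_load_service_plugins()` boil down to: each configured provider string is resolved to a class and instantiated, keyed by service type. The helper name `load_plugins` and the dotted `module.Class` convention are illustrative only; the real code goes through Neutron's own import utilities and configuration handling.

```python
import importlib

# Hypothetical simplification of neutron/manager.py plugin loading:
# core_plugin= and service_plugins= strings become instances in a dict.

def load_plugins(core_plugin, service_plugins):
    plugins = {}
    for provider in [core_plugin] + service_plugins:
        module_name, _, class_name = provider.rpartition(".")
        cls = getattr(importlib.import_module(module_name), class_name)
        instance = cls()
        # the core plugin registers under "CORE"; service plugins under
        # their own type (assumed get_plugin_type() accessor)
        key = "CORE" if provider == core_plugin else instance.get_plugin_type()
        plugins[key] = instance
    return plugins
```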
What are plugins & extensions 
Extensions are about resources and the actions on them 
• neutron/plugins/cisco|vmware|nuage/extensions/xxx.py 
@classmethod 
def get_resources(cls): 
for resource_name in ['router', 'floatingip']: 
… 
controller = base.create_resource(collection_name, resource_name, plugin…) 
ex = ResourceExtension(collection_name, controller, member_actions…) 
Plugins are used to support the resources 
• neutron/services/l3_router/l3_router_plugin.py 
• neutron/plugins/bigswitch/plugin.py 
supported_extension_aliases = ["router", "ext-gw-mode", "extraroute", "l3_agent_scheduler"] 
• neutron/extensions/l3.py 
• neutron/plugins/bigswitch/plugin.py 
def update_router(self, context, id, router): 
• neutron/extensions/l3.py 
• neutron/plugins/bigswitch/routerrule_db.py 
def get_router(self, context, id, fields=None):
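The `get_resources()` pattern above can be sketched as a toy extension class. Here `ResourceExtension` is a cut-down stand-in for the real neutron class, and the dict stands in for the controller returned by `base.create_resource(...)`:

```python
# Toy shapes; not the real neutron classes.
class ResourceExtension:
    def __init__(self, collection, controller):
        self.collection = collection
        self.controller = controller

class L3Extension:
    @classmethod
    def get_resources(cls):
        resources = []
        for resource_name in ["router", "floatingip"]:
            collection_name = resource_name + "s"      # REST collection, e.g. /v2.0/routers
            controller = {"resource": resource_name}   # stand-in for base.create_resource(...)
            resources.append(ResourceExtension(collection_name, controller))
        return resources
```

Each `ResourceExtension` is what the extension middleware later turns into routed URLs.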
neutronapiapp_v2_0: load extensions 
neutron/api/v2/router.py: APIRouter.factory() 
• __init__( ) 
1.1 plugin = manager.NeutronManager.get_plugin() 
1.2 ext_mgr = extensions.PluginAwareExtensionManager.get_instance() 
1.2.1 neutron/api/extensions.py : def get_extensions_path() 
1.2.2 neutron/api/extensions.py : class PluginAwareExtensionManager(ExtensionManager): 
__init__(paths, plugins) 
1.2.2.1 neutron/api/extensions.py : def _load_all_extensions(self): 
self._load_all_extensions_from_path(path) 
1.2.2.2 neutron/api/extensions.py : def _load_all_extensions_from_path(self, path): 
… 
self.add_extension(new_ext) 
1.2.2.3 neutron/api/extensions.py : def add_extension(self, ext): 
… 
self._check_extension(ext): 
Notes: 
• Extension path: the neutron standard extensions, plus any specified by api_extension_path= in neutron.conf. 
• Loading: each Python module name under the path is checked, and the first letter of the module name is capitalized to find the class in it; modules starting with "_" are excluded. 
• _check_extension: 
 1. checked per plugin (supported_extension_aliases) 
 2. check whether the potential extension has implemented the needed functions 
 3. check whether one of the plugins supports it; a plugin's supported_extension_aliases attribute defines which extensions it supports
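The three checks can be sketched as a single function. This is a simplification (the function name and interface list are assumptions), keeping only the logic described above: the candidate must implement the extension interface, and at least one plugin must list its alias in `supported_extension_aliases`:

```python
# Hedged sketch of the _check_extension logic, not the real neutron code.

def check_extension(ext, plugins):
    # 1/2. the candidate module must implement the extension interface
    for required in ("get_name", "get_alias", "get_resources"):
        if not callable(getattr(ext, required, None)):
            return False
    # 3. at least one configured plugin must declare support for its alias
    alias = ext.get_alias()
    return any(alias in getattr(p, "supported_extension_aliases", [])
               for p in plugins)
```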
neutronapiapp_v2_0: install core resource 
neutron/api/v2/router.py: APIRouter.factory() 
• __init__( ) 
1.1 plugin = manager.NeutronManager.get_plugin() 
1.2 ext_mgr = extensions.PluginAwareExtensionManager.get_instance() 
1.3 install core resources 
1.3.1 neutron/api/v2/router.py 
RESOURCES = {'network': 'networks', 'subnet': 'subnets', 'port': 'ports'}
extension filter: assemble extensions 
neutron/api/extensions.py 
• def plugin_aware_extension_middleware_factory(global_config, **local_config) 
1.1 def _factory(app): 
ext_mgr = PluginAwareExtensionManager.get_instance() 
return ExtensionMiddleware(app, ext_mgr=ext_mgr) 
:ExtensionMiddleware :PluginAwareExtensionManager :ExtensionDescriptor 
1. __init__(application, ext_mgr) 
1.1 get_resource() 
[for each extension] 
1.1.1 get_resources() 
Loop 
1.2 install route objects
URL processing (1/2) 
[Sequence: the controller receives the HTTP URL (1), constructs the Resource (1.1), deserializes the request body via TextDeserializer (1.2), resolves the action handler with getattr(action) (1.3), invokes create | update | show | index | delete (1.4), and serializes the result via DictSerializer (1.5).]
URL processing (2/2) 
[Sequence, continuing 1.4 (create | update | show | index | delete) into the plugin: calculate the plugin handler from the action (1.4.1), authz/input validation (1.4.2), call the handler function (1.4.3), _send_dhcp_notification(context, data, methodname) (1.4.4), then _view_(context, data, fields_to_strip) (1.4.5).] 
• The notification to Ceilometer also happens here (1.4.4). 
• The action is like create, update, show, index, or delete. 
• The handler function is like a plugin's create_net or list_nets.
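Step 1.4.1 is essentially name composition plus `getattr` dispatch. A minimal sketch (the helper name `dispatch` is made up; the real controller also handles validation and errors):

```python
# Illustrative only: action "create" on resource "network"
# resolves to plugin.create_network(...).

def dispatch(plugin, action, resource, *args, **kwargs):
    handler = getattr(plugin, "%s_%s" % (action, resource))  # 1.4.1
    return handler(*args, **kwargs)                          # 1.4.3
```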
setup.cfg <ml2 entry points> 
neutron.ml2.type_drivers = 
flat = neutron.plugins.ml2.drivers.type_flat:FlatTypeDriver 
local = neutron.plugins.ml2.drivers.type_local:LocalTypeDriver 
vlan = neutron.plugins.ml2.drivers.type_vlan:VlanTypeDriver 
gre = neutron.plugins.ml2.drivers.type_gre:GreTypeDriver 
vxlan = neutron.plugins.ml2.drivers.type_vxlan:VxlanTypeDriver 
neutron.ml2.mechanism_drivers = 
linuxbridge = neutron.plugins.ml2.drivers.mech_linuxbridge:LinuxbridgeMechanismDriver 
openvswitch = neutron.plugins.ml2.drivers.mech_openvswitch:OpenvswitchMechanismDriver 
hyperv = neutron.plugins.ml2.drivers.mech_hyperv:HypervMechanismDriver 
ncs = neutron.plugins.ml2.drivers.mechanism_ncs:NCSMechanismDriver 
arista = neutron.plugins.ml2.drivers.mech_arista.mechanism_arista:AristaDriver 
cisco_nexus = neutron.plugins.ml2.drivers.cisco.mech_cisco_nexus:CiscoNexusMechanismDriver 
l2population = neutron.plugins.ml2.drivers.l2pop.mech_driver:L2populationMechanismDriver 
…
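Each entry-point line follows the `alias = module:Class` convention. ML2 actually loads these through stevedore; a manual sketch of the same resolution looks like this (the helper name is illustrative):

```python
import importlib

# Resolve an entry-point line such as
#   "gre = neutron.plugins.ml2.drivers.type_gre:GreTypeDriver"
# into (alias, driver class).

def resolve_entry_point(line):
    alias, target = (s.strip() for s in line.split("="))
    module_name, class_name = target.split(":")
    return alias, getattr(importlib.import_module(module_name), class_name)
```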
ml2.ini <ml2 configuration file> 
neutron-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/ml2.ini 
[ml2] 
type_drivers = local,flat,vlan,gre,vxlan 
mechanism_drivers = openvswitch,linuxbridge 
tenant_network_types = vlan,gre,vxlan 
[ml2_type_flat] 
flat_networks = physnet1,physnet2 
[ml2_type_vlan] 
network_vlan_ranges = physnet1:1000:2999,physnet2 
[ml2_type_gre] 
tunnel_id_ranges = 1:1000 
[ml2_type_vxlan] 
vni_ranges = 1001:2000
__init__ : neutron manager (server) 
neutron/manager.py: __init__() 
• Create core plugin instance [core_plugin=] 
Ml2 plugin :TypeManager :TypeDriver :MechanismManager :MechanismDriver 
1: __init__() 
1.1: initialize() 
loop 
[loop on drivers] 
1.1.1: initialize() 
[loop on ordered_mech_drivers] 
1.2.1: initialize() 
loop 
1.2: initialize() 
1.3: _setup_rpc() 
Reads from ml2.ini which drivers to use and sets up the environment accordingly.
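The initialization sequence above can be sketched in a few lines: the type and mechanism managers each loop over their configured drivers, calling `initialize()` in order. The class names here are stand-ins, not the real ML2 managers:

```python
# Toy shape of Ml2Plugin.__init__ as sequenced in the diagram.

class DriverManager:
    def __init__(self, drivers):
        self.drivers = drivers

    def initialize(self):
        for driver in self.drivers:            # 1.1.1 / 1.2.1: loop on drivers
            driver.initialize()

class Ml2PluginSketch:
    def __init__(self, type_drivers, mech_drivers):
        self.type_manager = DriverManager(type_drivers)
        self.mechanism_manager = DriverManager(mech_drivers)
        self.type_manager.initialize()         # 1.1
        self.mechanism_manager.initialize()    # 1.2
        # 1.3 _setup_rpc() omitted in this sketch
```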
Ml2 RPC structure 
Ml2Plugin has two sides: callbacks (RPC handling for requests from the DHCP agent) and notifier (notifications toward the L2 agents). 
callbacks: 
RpcCallbacks 
: neutron/plugins/ml2/rpc.py 
SecurityGroupServerRpcCallbackMixin() 
: neutron/db/securitygroups_rpc_base.py 
DhcpRpcCallbackMixin() 
: neutron/db/dhcp_rpc_base.py 
TunnelRpcCallbackMixin() 
: neutron/plugins/ml2/drivers/type_tunnel.py 
notifier: 
AgentNotifierApi() 
: neutron/plugins/<implemented per plugin> 
TunnelAgentRpcApiMixin 
: neutron/plugins/ml2/drivers/type_tunnel.py 
SecurityGroupAgentRpcApiMixin 
: neutron/agent/securitygroups_rpc.py
RPC of L2 agent: ovs neutron agent 
callback (receives messages via the plugin): 
OVSNeutronAgent 
: neutron/plugins/<each plugin's agent> 
+ network_delete(context, **kwargs) 
+ port_update(context, **kwargs) 
+ tunnel_update(context, **kwargs) 
SecurityGroupAgentRpcApiMixin 
: neutron/agent/securitygroups_rpc.py 
+ security_groups_rule_updated(context, **kwargs) 
+ security_groups_member_updated(context, **kwargs) 
+ security_groups_provider_updated(context, **kwargs) 
plugin_rpc (communicates with the plugin): 
OVSPluginApi 
: provided through each plugin's agent 
PluginApi 
: provided through each plugin's agent; the methods below are in neutron/agent/rpc.py 
+ get_device_details(…, device, agent_id) 
+ update_device_down(…, agent_id, host=None) 
+ update_device_up(…, agent_id, host=None) 
+ tunnel_sync(…, tunnel_ip, tunnel_type=None) 
SecurityGroupServerRpcApiMixin 
: neutron/db/securitygroups_rpc_base.py 
+ security_group_rules_for_devices(…)
Plugin to agent 
Plugin side (Ml2Plugin notifier): 
AgentNotifierApi 
: neutron/plugins/<each plugin's agent>; the methods below are in neutron/agent/rpc.py 
+ network_delete(context, network_id) 
+ port_update(context, port, …) 
SecurityGroupAgentRpcApiMixin 
: neutron/agent/securitygroups_rpc.py 
+ security_groups_rule_updated(…) 
+ security_groups_member_updated(…) 
+ security_groups_provider_updated(…) 
TunnelAgentRpcApiMixin 
: neutron/plugins/ml2/drivers/type_tunnel.py 
+ tunnel_update(…) 
L2 agent side: 
OVSNeutronAgent 
: neutron/plugins/<each plugin's agent> 
+ network_delete(context, **kwargs) 
+ port_update(context, **kwargs) 
+ tunnel_update(context, **kwargs) 
SecurityGroupAgentRpcCallbackMixin 
: neutron/db/securitygroups_rpc_base.py 
+ security_groups_rule_updated(…) 
+ security_groups_member_updated(…) 
+ security_groups_provider_updated(…) 
Fanout exchanges and per-agent queues between them: 
q-agent-notifier-tunnel-update_fanout -> q-agent-notifier-tunnel-update_fanout_<uuid> 
q-agent-notifier-port-update_fanout -> q-agent-notifier-port-update_fanout_<uuid> 
q-agent-notifier-network-delete_fanout -> q-agent-notifier-network-delete_fanout_<uuid> 
q-agent-notifier-security_group-update_fanout -> q-agent-notifier-security_group-update_fanout_<uuid>
L2 Agent to Plugin 
L2 agent side: 
OVSNeutronAgent 
: neutron/plugins/<each plugin's agent> 
+ network_delete(context, **kwargs) 
+ port_update(context, **kwargs) 
+ tunnel_update(context, **kwargs) 
OVSPluginApi (plugin_rpc) 
: provided through each plugin's agent 
PluginApi 
: provided through each plugin's agent; the methods below are in neutron/agent/rpc.py 
+ get_device_details(…, device, agent_id) 
+ update_device_down(…, agent_id, host=None) 
+ update_device_up(…, agent_id, host=None) 
+ tunnel_sync(…, tunnel_ip, tunnel_type=None) 
Exchange/queue: neutron / q_plugin 
Plugin side (Ml2Plugin callbacks): 
RpcCallbacks 
: neutron/plugins/ml2/rpc.py 
+ get_port_from_device(…) 
+ get_device_details(…) 
+ update_device_down(…) 
+ update_device_up(…) 
SecurityGroupServerRpcCallbackMixin 
: neutron/db/securitygroups_rpc_base.py 
+ security_group_rules_for_devices(…) 
TunnelRpcCallbackMixin 
: neutron/plugins/ml2/drivers/type_tunnel.py 
+ tunnel_sync(…)
RPC of DHCP agent 
DhcpAgent() (callback) 
: neutron/agent/dhcp_agent.py 
+ network_create_end(context, payload) 
+ network_update_end(context, payload) 
+ network_delete_end(context, payload) 
+ subnet_update_end(context, payload) 
+ subnet_delete_end(context, payload) 
+ port_update_end(context, payload) 
+ port_delete_end(context, payload) 
DhcpAgentWithStateReport 
: neutron/agent/dhcp_agent.py 
DhcpPluginApi (plugin_rpc) 
: neutron/agent/dhcp_agent.py 
+ get_active_networks_info(…) 
+ get_network_info(network_id) 
+ create_dhcp_port(port) 
+ update_dhcp_port(port_id, port) 
+ release_dhcp_port(network_id, device_id)
Neutron to agent 
DhcpAgentNotifyAPI 
: neutron/api/rpc/agentnotifiers/dhcp_rpc_agent_api.py 
+ notify(…, data, methodname) 
Neutron 
Server 
DHCPAgent 
dhcp_agent_fanout 
Exchange Queue 
neutron 
dhcp_agent_fanout 
_<uuid> 
dhcp_agent.<host> 
DhcpAgentWithStateReport 
: neutron/agent/dhcp_agent.py 
DhcpAgent() 
: neutron/agent/dhcp_agent.py 
+ network_create_end(context, payload) 
+ network_update_end(context, payload) 
+ network_delete_end(context, payload) 
+ subnet_update_end(context, payload) 
+ subnet_delete_end(context, payload) 
+ port_update_end(context, payload) 
+ port_delete_end(context, payload) 
'network.create.end', 
'network.update.end', 
'network.delete.end', 
'subnet.create.end', 
'subnet.update.end', 
'subnet.delete.end', 
'port.create.end', 
'port.update.end', 
'port.delete.end'
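Note how each event name maps onto a handler method: dots become underscores, so 'network.create.end' dispatches to network_create_end. A minimal sketch of that dispatch (class and payload shape are illustrative, not the real agent):

```python
# Toy DHCP-agent dispatch: event type -> handler method name.

class DhcpAgentSketch:
    def network_create_end(self, context, payload):
        return ("created", payload["network"]["id"])

    def dispatch(self, event_type, context, payload):
        # 'network.create.end' -> self.network_create_end(context, payload)
        handler = getattr(self, event_type.replace(".", "_"))
        return handler(context, payload)
```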
DHCP Agent to Plugin 
DHCPAgent Exchange Queue Plugins 
RpcCallbacks 
: neutron/plugins/ml2/rpc.py 
+ get_port_from_device(…) 
+ get_device_details(…) 
+ update_device_down(…) 
+ update_device_up(…) 
Neutron 
q_plugin 
DhcpPluginApi 
: neutron/agent/dhcp_agent.py 
+ get_active_networks_info(…) 
+ get_network_info(network_id) 
+ create_dhcp_port(port) 
+ update_dhcp_port(port_id, port) 
+ release_dhcp_port(network_id, device_id) 
callbacks DhcpRpcCallbackMixin 
: neutron/db/dhcp_rpc_base.py 
+ get_active_networks_info(…) 
+ get_network_info(…) 
+ release_dhcp_port(…) 
+ create_dhcp_port(…) 
+ update_dhcp_port(…) 
DhcpAgentWithStateReport 
: neutron/agent/dhcp_agent.py 
plugin_rpc 
Ml2Plugin
nova.conf 
… 
network_api_class=nova.network.neutronv2.api.API 
… 
neutron_url=http://<eth0:IP Address>:9696 
… 
neutron_region_name=RegionOne 
… 
neutron_admin_tenant_name=service 
… 
neutron_auth_strategy=keystone 
… 
neutron_admin_auth_url=http://<eth0:IP Address>:35357/v2.0 
… 
neutron_admin_password=<edit password> 
… 
neutron_admin_username=neutron 
… 
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtGenericVIFDriver
interaction to boot VM (OVS bridge) 
_build_instance() on Nova compute: 
1. _allocate_network() 
2. Create port: REST API call to the Neutron Server 
3. vif_driver.plug() 
4. Add a port tapxxxxxx, with external_ids set, to ovs bridge br-int 
5. The Neutron openvswitch agent (looping to detect port updates on br-int) finds that a port tapxxxxxx was added 
6. Get the Neutron port id from the external_ids 
7. get_device_details(port_id) via the message queue 
8. Set up the ovs port so that the VM's network works 
9. update_device_up()
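Steps 5 to 9 can be condensed into a schematic polling pass. Everything here is a stub: the function name, the `external_ids`/`setup_port` accessors, and the RPC object are stand-ins for the real agent and OVSDB APIs:

```python
# Schematic of one agent polling iteration over br-int.

def process_new_ports(agent, plugin_rpc, current_ports, known_ports):
    for port in current_ports - known_ports:                 # 5. newly added tap ports
        port_id = agent.external_ids(port)["iface-id"]       # 6. Neutron port id
        details = plugin_rpc.get_device_details(port_id)     # 7. RPC to the plugin
        agent.setup_port(port, details)                      # 8. wire flows/tag for the VM
        plugin_rpc.update_device_up(port_id)                 # 9. mark the port active
    return current_ports
```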
Overview & Features 
[Diagram: the OpenIRIS ml2 Manager sits between Neutron's ml2 plugin and the OpenIRIS-pNaaS controller. OpenIRIS-pNaaS modules: Tunnel Manager, VNID-to-Flow Mapper, Virtual Routing Manager, Policy Manager, ECMP, Flow Monitor, Queuing, Path Computation, BW/QoS/ToS Link Cost Manager, OVS-Plugin, ARP Proxy, and E2E Path Visualizer, on top of core modules (Topology Manager, Forwarding Manager, MAC Learning, Status Manager, Switch Manager). The controller drives a fabric of OF Switches.]
Architecture 
[Diagram: the Control Node and Network Node talk to OpenIRIS-pNaaS over REST APIs (including the Neutron API). OpenIRIS-pNaaS (Tunnel Manager, VNID-to-Flow Mapper, Virtual Routing Manager, Policy Manager, ECMP, Flow Monitor, Queuing, Path Computation, BW/QoS/ToS Link Cost Manager, OVS-Plugin, ARP Proxy, E2E Path Visualizer) controls a fabric of OF Switches connecting the Compute Nodes, each running Open vSwitch with VMs (VM1, VM2) attached.]
Overview & Features 
Overview 
• Using the REST API 
Features 
• Network (http://<IRIS IP>:8080/vm/ml2/networks/{uuid}) 
 create_network_postcommit 
 update_network_postcommit 
 delete_network_postcommit 
• Subnet (http://<IRIS IP>:8080/vm/ml2/subnets/{uuid}) 
 create_subnet_postcommit 
 update_subnet_postcommit 
 delete_subnet_postcommit 
• Port (http://<IRIS IP>:8080/vm/ml2/ports/{uuid}) 
 create_port_postcommit 
 update_port_postcommit 
 delete_port_postcommit
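A hedged sketch of what one of these hooks could look like in the mechanism driver: on create_network_postcommit, PUT the network JSON to the controller URL above. The example host, the payload shape ({"network": ...}), and the helper name are all assumptions; the slide only gives the URL pattern and hook names.

```python
import json
import urllib.request

# Stand-in for http://<IRIS IP>:8080/vm/ml2 (192.0.2.x is a documentation address).
EXAMPLE_IRIS_URL = "http://192.0.2.10:8080/vm/ml2"

def build_postcommit_request(context):
    """Build the PUT request a create_network_postcommit hook might send."""
    network = context.current                      # dict with 'id', 'name', ...
    body = json.dumps({"network": network}).encode()
    return urllib.request.Request(
        "%s/networks/%s" % (EXAMPLE_IRIS_URL, network["id"]),
        data=body, method="PUT",
        headers={"Content-Type": "application/json"})
```

A real driver would pass the built request to urllib.request.urlopen (or a requests session) and handle controller errors.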
Create Network / Subnet 
REST Call 
• Get : http://IP:8080/controller/nb/v2/neutron/networks/af57c272-fe28-4a1d-a5e0-48b42508f1ea
Create Network / Subnet 
REST Call 
• Get : http://IP:8080/controller/nb/v2/neutron/subnets/d07c4855-f728-415d-b841-c62086a1ca0e
Create vm 
REST Call 
• Get : http://IP:8080/controller/nb/v2/neutron/ports/8f59e83c-7dd9-4c8d-b642-67da44b00e30
Create vm 
REST Call 
• Get : http://IP:8080/controller/nb/v2/neutron/ports/90a6dfc6-3f72-4aa9-9c99-1c1b8bbd2eac
Install 
Network Node 
• service neutron-server stop 
• service neutron-openvswitch-agent stop 
• Download the OpenIRIS ml2 mechanism driver into one of: 
 /usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers 
 /usr/lib/python2.6/site-packages/neutron/plugins/ml2/drivers 
• Edit file 
 /etc/neutron/plugins/ml2/ml2_conf.ini 
 [openiris] 
 [ml2_openiris] 
• service neutron-server start 
DevStack 
• Github 
 DevStack : https://github.com/uni2u/DevStack.git (Find bugs...) 
 TBD 
 Neutron(ml2 plugin) : https://github.com/uni2u/Neutron.git (Find bugs...) 
 TBD 
• We need Stable Version 
 Screenshot : ubuntu 12.04 / 14.04, Fedora, etc
Todo 
DevStack 
• Provide IRIS ml2 plugin in devstack (OpenStack Project) 
 mechanism_iris, … 
 we need devstack! 
• Script Files 
 Easier DevStack installation
Todo 
/opt/stack/neutron/setup.cfg
Overview & Features 
Overview 
• OpenIRIS ML2 Module 
 Download Git : https://github.com/bjlee72/IRIS.git 
• Now 
 TBD
Overview & Features 
Features (ml2 classes) 
• IOpenstackML2ConnectorServie.java 
 Interface of the ML2 module (OFMOpenstackML2Connector.java) 
 Incomplete (the interface is still empty) 
• NetworkConfiguration.java 
 Called by the ml2 plugin 
 REST (http://IP:8080/vm/ml2) 
• OFMOpenstackML2Connector.java 
 Module class 
• RestCreateNetwork.java 
 create_network_postcommit (ml2 plugin) 
 REST (http://IP:8080/vm/ml2/networks/{uuid}) 
 Incomplete (PUT, POST, DELETE) 
• RestCreatePort.java 
 create_port_postcommit (ml2 plugin) 
 REST (http://IP:8080/vm/ml2/ports/{uuid}) 
 Incomplete (PUT, POST, DELETE) 
• RestCreateSubnet.java 
 create_subnet_postcommit (ml2 plugin) 
 REST (http://IP:8080/vm/ml2/subnets/{uuid}) 
 Incomplete (PUT, POST, DELETE)
Prerequisites & Hands-on 
Prerequisites 
• VirtualBox ver 4.3.12 (https://www.virtualbox.org/wiki/Downloads) 
• Ubuntu 14.04 LTS (http://www.ubuntu.com/download/desktop) 
VirtualBox Setup
VirtualBox VM Create – Control Node 
VirtualBox VM Create – Control Node 
VirtualBox VM Start – Control Node
Control Node Setup 
Installs 
• Services deployed 
 Compute(Nova) / Network(Neutron) / Object Storage(Swift) / Image Storage (Glance) / Block 
Storage(Cinder) / Identity(Keystone) / Database(Trove) / Orchestration(Heat) / 
Dashboard(Horizon) 
• Installation Order 
 System Update, Upgrade 
sudo apt-get update 
sudo apt-get upgrade 
sudo apt-get dist-upgrade 
 Install git, vim 
sudo apt-get install git vim 
 User Permission 
sudo adduser stack 
echo "stack ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers 
 Download Devstack (ver. Icehouse) 
git clone https://github.com/openstack-dev/devstack.git -b stable/icehouse devstack/

Harmonia open iris_basic_v0.1

  • 2.
    1 2 3 4 Introduce Architecture OpenStack IRIS Neutron Plugin 5 6 7 IRIS ML2 Mechanism Driver IRIS Virtual Network Module Todo
  • 4.
    Introduce Harmonia •코드네임 : 하르모니아 (Harmonia)  개발 코드 네임  정식 명칭 : IRIS-pNaaS • Harmonia Logo • Harmonia ?  전쟁의 신(아레스)과 미와 사랑의 여신(아프로디테) 사이에서 태어난 그리스 여신으 로 ‘조화’를 의미함  음악 용어인 하모니(Harmony)의 어원  Virtual Network의 조화를 이루고자 하는 의미
  • 5.
    Introduce Keywords :SIA (Swift, Inexpensive, Automation) • Swift • Inexpensive • Automation We can create virtual networks, make swift networks and low-price. Don’t worry about! It’s automation.
  • 7.
    개념도 VM2 VM3 VM5 VM6 VM1 Tenant A VM4 Tenant B REST API VM2 VM4 VM5 Compute Node Compute Node Network Node Control Node Compute Node Compute Node VM1 VM3 VM6 OpenFlow
  • 8.
    OpenStack Concept Architecture http://docs.openstack.org/icehouse/install-guide/install/apt/content/ch_overview.html
  • 9.
    Control Node, NetworkNode, Compute Node http://docs.openstack.org/icehouse/install-guide/install/apt/content/ch_overview.html SDN Controller
  • 10.
    구성도 (서버 랙) eth0 eth1 eth2 eth2 eth0 eth0 eth1 eth1 eth0 Data Network Management Network External Network OF Switch eth1 Control Node Network Node Compute Node OpenFlow
  • 11.
    구성도 (서버 랙- 실사) Control Node Management Network Hub - OS : Ubuntu 14.04 , Fedora, etc… - OVS : version 2.0 이상 * 주의 : Linux Kernel 과 dependency - 네트워크 인터페이스 : 2 * eth0 : Management Network (사설 IP) * eth1 : 외부망 연동 (공인 IP) Network Node - OS : Ubuntu 14.04 , Fedora, etc… - OVS : version 2.0 이상 * 주의 : Linux Kernel 과 dependency - 네트워크 인터페이스 : 3 이상 * eth0 : Management Network (사설 IP) * eth1 : Data Network (사설 IP) * eth2 : 외부망 연동 (공인 IP) Compute Node - OS : Ubuntu 14.04 , Fedora, etc… - OVS : version 2.0 이상 * 주의 : Linux Kernel 과 dependency - 네트워크 인터페이스 : 3 이상 * eth0 : Management Network (사설 IP) * eth1 : Data Network (사설 IP) * eth2 : 외부망 연동 (공인 IP) ① ② ③ ① ② ③ Control Node Network Node Compute Node Compute Node Compute Node Compute Node Compute Node Compute Node Compute Node OpenFlow Hub Switch Management Network (일반 Hub) Data Network (OpenFlow SW) + SDN Controller Data Network Switch (OpenFlow SW) - OF Switch, OVS, OpenWRT, … - Connect SDN Controller
  • 13.
    Overview & Features Overview • A virtual switch or Virtual Ethernet Bridge (VEB) • A key component of networking for virtualized computing • User-space : configuration, control • Kernel-space : datapath (include in main Linux kernel since version 3.3) • Cisco Nexus 1000v, VMware vDS, IBM DVS 5000v, MS Hyper-V vSwitch Features • Visibility into inter-VM communication via NetFlow, sFlow®, IPFIX, SPAN, LACP (IEEE 802.1AX-2008) • Standard 802.1Q VLAN model with trunking • STP (IEEE 802.1D-1998), Fine-grained QoS control • NIC bonding with source-MAC load balancing, active backup, and L4 hashing • OpenFlow protocol support (including many extensions for virtualization) • Multiple tunneling protocols (VXLAN, Ethernet over GRE, CAPWAP, Ipsec, GRE over Ipsec)
  • 14.
    Open vSwitch Architecture ovs-vsctl ovsdb-client ovs-appctl ovs-dpctl ovs-brcompatd ovs-vswitchd brcompat.ko openvswitch.ko Kernel Datapath (Fast Path) Kernel space user space ovsdb-server Netlink tap Remote Open vSwitch db OpenFlow Controller ovs-ofctl VM vNIC OVS Management (JSON RPC) OpenFlow
  • 15.
    Open vSwitch Architecture br-ovs VM VM vNIC vNIC vnet0 tap2 vnet1 Packet flows eth0 tap1 Port Flow Table Bridge Interface bond0 eth2 eth1 eth2
  • 16.
    Open vSwitch Architecture ovs-vswitchd • a daemon that implements the switch, along with a companion Linux kernel module for flow-based switching ovsdb-server • a lightweight database server that ovs-vswitchd queries to obtain configuration ovs-vsctl • a utility for querying and updating the configuration of ovs-vswitchd ovs-dpctl • a tool for configuring and monitoring the switch kernel module ovs-appctl • a utility that sends commands to running Open vSwitch daemons (ovs-vswitchd) ovs-controller • a simple OpenFlow controller reference implementation brocompat.ko • Linux bridge compatibility module openvswitch.ko • Open vSwitch switching datapath
  • 17.
    Open vSwitch Configuration Table Purpose Open_vSwitch Open vSwitch configuration Bridge Bridge configuration Port Port configuration Interface One physical network device in a Port QoS Quality of Service configuration Queue QoS output queue Mirror Port mirroring Controller OpenFlow controller configuration Manager OVSDB management connection NetFlow NetFlow configuration SSL SSL configuration sFlow sFlow configuration Capability Capability configuration
  • 18.
    Open vSwitch Configurationsample $ sudo ovs-vsctl show 225d73cc-15b3-4db5-9b45-e783f7c49a10 Bridge br-tun Port "gre-3" Interface "gre-3" type: gre options: {in_key=flow, out_key=flow, remote_ip="192.168.0.10"} Port br-tun Interface br-tun type: internal Port patch-int Interface patch-int type: patch options: {peer=patch-tun} Bridge br-int Port "tap1" tag: 1 Interface "tap1" Port "tap2" tag: 1 Interface "tap2" Port br-int Interface br-int type: internal Port patch-tun Interface patch-tun type: patch options: {peer=patch-int}
  • 19.
    Open vSwitch Configurationsample Linux Networking Stack VM VM vNIC vNIC br-int Eth0 tap1 External IP gre3 Eth2 192.168.10.20 Eth1 192.168.0.20 tap2 br-tun patch-tun patch-int GRE Tunnel 192.168.0.10
  • 20.
    Open vSwitch Demo Switch eth0 VM VM vNIC vNIC tap1 tap2 OpenvSwitch Bridge eth1 VM VM vNIC vNIC gre-1 OpenvSwitch Bridge Switch tap1 eth1 tap2 eth0 GRE tunnel gre-1 External network Tunneling network 192.168.0.0/24
    Virtual Network -OpenStack Control Node Nova Keystone Glance Horizon eth1 eth0 External network eth2 eth0 Network Node Neutron Server Neutron L3-Agent eth1 Management network 192.168.0.0/24 eth2 Compute Node 01 eth1 eth0 eth2 Compute Node 02 eth1 eth0 eth2 Neutron Agent Neutron OpenvSwitch Plug-in Nova Compute Neutron Agent Neutron OpenvSwitch Plug-in Nova Compute Data network 192.168.10.0/24
    eth0 Network Node br-ex qg~~ br-int qr~~ eth1 gre-1 eth0 Compute Node 01 tap1 eth1 tap2 VM VM GRE tunnel gre-1 Virtual Network - OpenStack tap~~ br-tun gre-2 gre-2 br-tun br-int Tunnel <-> Compute Node 02 • qg~~ : external gateway interface • qr~~ : virtual router interface • tap~~ : network service interface (DHCP, DNS, …)
Overview & Features
Overview
• A virtual switch or Virtual Ethernet Bridge (VEB)
• A key component of networking for virtualized computing
• User-space : configuration, control
• Kernel-space : datapath (included in the mainline Linux kernel since version 3.3)
• Comparable products : Cisco Nexus 1000v, VMware vDS, IBM DVS 5000v, MS Hyper-V vSwitch
Features
• Visibility into inter-VM communication via NetFlow, sFlow®, IPFIX, SPAN, LACP (IEEE 802.1AX-2008)
• Standard 802.1Q VLAN model with trunking
• STP (IEEE 802.1D-1998), fine-grained QoS control
• NIC bonding with source-MAC load balancing, active backup, and L4 hashing
• OpenFlow protocol support (including many extensions for virtualization)
• Multiple tunneling protocols (VXLAN, Ethernet over GRE, CAPWAP, IPsec, GRE over IPsec)
Layer Diagram Core REST API Extension A REST API Extension … REST API Extension N REST API AuthN / AuthZ / Input Validation / Output View Core Plugin Interface Service A Plugin Interface Service … Plugin Interface Service N Plugin Interface Core Plugin (Vendor specific) Service A Plugin Service N Plugin Agents
Application and filters

[composite:neutron]
use = egg:Paste#urlmap
/: neutronversions
/v2.0: neutronapi_v2_0

[composite:neutronapi_v2_0]
use = call:neutron.auth:pipeline_factory
keystone = authtoken keystonecontext extensions neutronapiapp_v2_0

[filter:keystonecontext]
paste.filter_factory = neutron.auth:NeutronKeystoneContext.factory

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory

[filter:extensions]
paste.filter_factory = neutron.api.extensions:plugin_aware_extension_middleware_factory

[app:neutronversions]
paste.app_factory = neutron.api.versions:Versions.factory

[app:neutronapiapp_v2_0]
paste.app_factory = neutron.api.v2.router:APIRouter.factory
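The [composite]/[filter]/[app] wiring above composes middleware around a final app, in the order listed in the pipeline. A minimal sketch of that composition with hypothetical stand-in handlers (not the actual Paste or neutron code; dicts stand in for WSGI requests):

```python
# Sketch of a Paste-style pipeline: filters wrap the final app in order.
# Handler names mirror the pipeline above; the bodies are illustrative only.

def authtoken(app):
    def handler(request):
        request["authenticated"] = True  # stand-in for the Keystone token check
        return app(request)
    return handler

def keystonecontext(app):
    def handler(request):
        # stand-in for NeutronKeystoneContext building a request context
        request["context"] = {"tenant": request.get("tenant", "demo")}
        return app(request)
    return handler

def neutronapiapp_v2_0(request):
    # Final app: handles declared URLs, otherwise returns 404 (HTTPNotFound)
    if request["path"] in ("/v2.0/networks", "/v2.0/ports"):
        return {"status": 200, "context": request["context"]}
    return {"status": 404}

# Assemble in the order given in [composite:neutronapi_v2_0]
pipeline = authtoken(keystonecontext(neutronapiapp_v2_0))

print(pipeline({"path": "/v2.0/networks"})["status"])  # 200
print(pipeline({"path": "/v2.0/unknown"})["status"])   # 404
```

Each filter only decorates the request and delegates; the final app alone decides whether the URL is handled.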
neutron/server/__init__.py : main()
config.parse(sys.argv[1:])  # --config-file neutron.conf --config-file XXXXX.ini
neutron/common/config.py
def load_paste_app(app_name)  # name of the application to load, e.g. load_paste_app("neutron")
• neutron/auth.py
def pipeline_factory(loader, global_conf, **local_conf):
• neutron/api/v2/router.py
class APIRouter(wsgi.Router):
def factory(cls, global_config, **local_config):
• neutron/api/extensions.py
def plugin_aware_extension_middleware_factory(global_config, **local_config):
• neutron/auth.py
class NeutronKeystoneContext(wsgi.Middleware):
pipeline
URL request → authtoken → keystonecontext → extensions : is the URL declared here? if so, process and return the response
→ neutronapiapp_v2_0 : is the URL declared here? if so, process ; if not, return HTTPNotFound
neutron/api/v2/router.py : APIRouter.factory()
1. __init__()
1.1 plugin = manager.NeutronManager.get_plugin()
1.1.1 neutron/manager.py : __init__()
1.1.1.1 def _create_instance()  # create core plugin instance
1.1.1.2 def _load_service_plugins()  # load service plugins
load plugins from neutron/neutron.conf : core_plugin = ml2, service_plugins = …
NeutronManager : service_plugins = {"CORE": ml2, "LOADBALANCER": xxx, …}
What are plugins & extensions
Extensions are about resources and the actions on them
• neutron/plugins/cisco|vmware|nuage/extensions/xxx.py
@classmethod
def get_resources(cls):
for resource_name in ['router', 'floatingip']: …
controller = base.create_resource(collection_name, resource_name, plugin…)
ex = ResourceExtension(collection_name, controller, member_actions…)
Plugins are used to support the resources
• neutron/services/l3_router/l3_router_plugin.py
• neutron/plugins/bigswitch/plugin.py
supported_extension_aliases = ["router", "ext-gw-mode", "extraroute", "l3_agent_scheduler"]
• neutron/extensions/l3.py, neutron/plugins/bigswitch/plugin.py
def update_router(self, context, id, router):
• neutron/extensions/l3.py, neutron/plugins/bigswitch/routerrule_db.py
def get_router(self, context, id, fields=None):
neutronapiapp_v2_0 : load extensions
neutron/api/v2/router.py : APIRouter.factory()
• __init__()
1.1 plugin = manager.NeutronManager.get_plugin()
1.2 ext_mgr = extensions.PluginAwareExtensionManager.get_instance()
1.2.1 neutron/api/extensions.py : def get_extensions_path()
1.2.2 neutron/api/extensions.py : class PluginAwareExtensionManager(ExtensionManager) : __init__(paths, plugins)
1.2.2.1 neutron/api/extensions.py : def _load_all_extensions(self): self._load_all_extensions_from_path(path)
1.2.2.2 neutron/api/extensions.py : def _load_all_extensions_from_path(self, path): … self.add_extension(new_ext)
1.2.2.3 neutron/api/extensions.py : def add_extension(self, ext): … self._check_extension(ext)
The path is the neutron standard extension path plus the ones specified by api_extension_path= in neutron.conf. Each python module name under the path is checked (excluding modules starting with "_"), and the first letter of the module name is capitalized to find the extension class in it.
_check_extension : 1. check each plugin (supported_extension_aliases) 2. check if the potential extension has implemented the needed functions 3. check if one of the plugins supports it ; a plugin's supported_extension_aliases attribute defines what extensions it supports.
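The discovery convention described above (capitalize the module name, skip "_" modules, check supported_extension_aliases) can be sketched as follows. FakePlugin and both helper names are illustrative stand-ins, not the real loader:

```python
# Sketch of Neutron's extension-discovery convention:
# module name -> candidate class name, then plugin support check.

def candidate_class_name(module_name):
    """Capitalize the first letter to get the extension class name;
    modules starting with "_" are skipped (return None)."""
    if module_name.startswith("_"):
        return None
    return module_name[0].upper() + module_name[1:]

def is_supported(ext_alias, plugins):
    """An extension is kept only if some plugin lists its alias."""
    return any(ext_alias in p.supported_extension_aliases for p in plugins)

class FakePlugin:
    # mirrors e.g. the bigswitch plugin's attribute shown above
    supported_extension_aliases = ["router", "ext-gw-mode"]

print(candidate_class_name("l3"))              # L3
print(candidate_class_name("_private"))        # None
print(is_supported("router", [FakePlugin()]))  # True
print(is_supported("lbaas", [FakePlugin()]))   # False
```

The real _check_extension additionally verifies that the class implements the required descriptor methods before registering it.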
neutronapiapp_v2_0 : install core resources
neutron/api/v2/router.py : APIRouter.factory()
• __init__()
1.1 plugin = manager.NeutronManager.get_plugin()
1.2 ext_mgr = extensions.PluginAwareExtensionManager.get_instance()
1.3 install core resources
1.3.1 neutron/api/v2/router.py : RESOURCES = {'network': 'networks', 'subnet': 'subnets', 'port': 'ports'}
extension filter : assemble extensions
neutron/api/extensions.py
• def plugin_aware_extension_middleware_factory(global_config, **local_config)
1.1 def _factory(app): ext_mgr = PluginAwareExtensionManager.get_instance(); return ExtensionMiddleware(app, ext_mgr=ext_mgr)
Sequence (ExtensionMiddleware, PluginAwareExtensionManager, ExtensionDescriptor) :
1. __init__(application, ext_mgr)
1.1 get_resource() [loop : for each extension] 1.1.1 get_resources()
1.2 install route objects
URL processing (1/2)
Participants : Resource, TextDeserializer, DictSerializer, Control Node
1: HTTP URL
1.1: __init__
1.2: deserialize(data string)
1.3: getattr(action)
1.4: create | update | show | index | delete
1.5: serialize(data)
URL processing (2/2)
Control Node → plugin : Plugin
1.4: create | update | show | index | delete
1.4.1: calculate plugin handler (action)
1.4.2: authz / input validation
1.4.3: {handler_fun}
1.4.4: _send_dhcp_notification(context, data, methodname)
1.4.5: _view_(context, data, fields_to_strip)
Notification to ceilometer also happens here.
Action is like create, update, show, index, or delete ; handler_fun is like the create_net or list_nets function of plugins.
setup.cfg <ml2 setup>
neutron.ml2.type_drivers =
flat = neutron.plugins.ml2.drivers.type_flat:FlatTypeDriver
local = neutron.plugins.ml2.drivers.type_local:LocalTypeDriver
vlan = neutron.plugins.ml2.drivers.type_vlan:VlanTypeDriver
gre = neutron.plugins.ml2.drivers.type_gre:GreTypeDriver
vxlan = neutron.plugins.ml2.drivers.type_vxlan:VxlanTypeDriver
neutron.ml2.mechanism_drivers =
linuxbridge = neutron.plugins.ml2.drivers.mech_linuxbridge:LinuxbridgeMechanismDriver
openvswitch = neutron.plugins.ml2.drivers.mech_openvswitch:OpenvswitchMechanismDriver
hyperv = neutron.plugins.ml2.drivers.mech_hyperv:HypervMechanismDriver
ncs = neutron.plugins.ml2.drivers.mechanism_ncs:NCSMechanismDriver
arista = neutron.plugins.ml2.drivers.mech_arista.mechanism_arista:AristaDriver
cisco_nexus = neutron.plugins.ml2.drivers.cisco.mech_cisco_nexus:CiscoNexusMechanismDriver
l2population = neutron.plugins.ml2.drivers.l2pop.mech_driver:L2populationMechanismDriver
…
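Each "name = module:Class" entry point above is resolved to a driver class at load time (the real code uses stevedore on top of this mechanism). A minimal importlib-based sketch; since neutron itself is not importable here, the demo resolves a stdlib class instead:

```python
import importlib

def load_entry_point(spec):
    """Resolve a 'module:Class' entry-point string to the class object,
    as in 'neutron.plugins.ml2.drivers.type_vlan:VlanTypeDriver'."""
    module_path, class_name = spec.split(":")
    module = importlib.import_module(module_path)
    return getattr(module, class_name)

# Demonstrate with a stdlib class (neutron is not installed here)
cls = load_entry_point("collections:OrderedDict")
print(cls.__name__)  # OrderedDict
```

ML2's TypeManager and MechanismManager keep the resolved drivers keyed by the short names on the left-hand side (flat, vlan, openvswitch, …), which is what ml2.ini refers to.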
ml2.ini <ml2 configuration file>
neutron-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/ml2.ini

[ml2]
type_drivers = local,flat,vlan,gre,vxlan
mechanism_drivers = openvswitch,linuxbridge
tenant_network_types = vlan,gre,vxlan

[ml2_type_flat]
flat_networks = physnet1,physnet2

[ml2_type_vlan]
network_vlan_ranges = physnet1:1000:2999,physnet2

[ml2_type_gre]
tunnel_id_ranges = 1:1000

[ml2_type_vxlan]
vni_ranges = 1001:2000
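A value like network_vlan_ranges = physnet1:1000:2999,physnet2 names a physical network with an optional VLAN id range. A simplified sketch of that parsing (the real VlanTypeDriver adds validation and error handling):

```python
# Sketch: parse ml2.ini's network_vlan_ranges value into
# {physnet: (low, high)} with None meaning "no range restriction".

def parse_vlan_ranges(value):
    ranges = {}
    for entry in value.split(","):
        parts = entry.split(":")
        if len(parts) == 3:
            physnet, lo, hi = parts
            ranges[physnet] = (int(lo), int(hi))
        else:
            ranges[parts[0]] = None  # whole physnet, any VLAN id
    return ranges

print(parse_vlan_ranges("physnet1:1000:2999,physnet2"))
# {'physnet1': (1000, 2999), 'physnet2': None}
```

The gre tunnel_id_ranges and vxlan vni_ranges values follow the same low:high convention, just without a physnet prefix.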
__init__ : NeutronManager (server)
neutron/manager.py : __init__()
• Create core plugin instance [core_plugin=]
Sequence (Ml2Plugin, TypeManager, TypeDriver, MechanismManager, MechanismDriver) :
1: __init__()
1.1: TypeManager.initialize()
1.1.1: initialize() [loop on drivers]
1.2: MechanismManager.initialize()
1.2.1: initialize() [loop on ordered_mech_drivers]
1.3: _setup_rpc()
Reads from ml2.ini which drivers are to be used and configures the environment accordingly.
Ml2 RPC structure
Ml2Plugin callbacks (RPC handling for the DHCP agent) :
• SecurityGroupServerRpcCallbackMixin() : neutron/db/securitygroups_rpc_base.py
• DhcpRpcCallbackMixin() : neutron/db/dhcp_rpc_base.py
• TunnelRpcCallbackMixin() : neutron/plugins/ml2/drivers/type_tunnel.py
• RpcCallbacks : neutron/plugins/ml2/rpc.py
Ml2Plugin notifier (notifications to the L2 agent) :
• AgentNotifierApi() : neutron/plugins/, implemented per plugin
• TunnelAgentRpcApiMixin : neutron/plugins/ml2/drivers/type_tunnel.py
• SecurityGroupAgentRpcApiMixin : neutron/agent/securitygroups_rpc.py
RPC of L2 agent : OVS neutron agent
OVSNeutronAgent : neutron/plugins/, each plugin's agent (receives messages through the plugin)
+ network_delete(context, **kwargs)
+ port_update(context, **kwargs)
+ tunnel_update(context, **kwargs)
SecurityGroupAgentRpcApiMixin : neutron/agent/securitygroups_rpc.py
+ security_groups_rule_updated(context, **kwargs)
+ security_groups_member_updated(context, **kwargs)
+ security_groups_provider_updated(context, **kwargs)
plugin_rpc (communicates with the plugin) :
OVSPluginApi / PluginApi : provided through each plugin's agent ; the following are in neutron/agent/rpc.py
+ get_device_details(…, device, agent_id)
+ update_device_down(…, agent_id, host=None)
+ update_device_up(…, agent_id, host=None)
+ tunnel_sync(…, tunnel_ip, tunnel_type=None)
SecurityGroupServerRpcApiMixin : neutron/db/securitygroups_rpc_base.py
+ security_group_rules_for_devices(…)
Plugin to agent (Ml2Plugin notifier → exchanges/queues → L2Agent)
AgentNotifierApi : neutron/plugins/, each plugin's agent ; the following are in neutron/agent/rpc.py
+ network_delete(context, network_id)
+ port_update(context, port, …)
TunnelAgentRpcApiMixin : neutron/plugins/ml2/drivers/type_tunnel.py
+ tunnel_update(…)
SecurityGroupAgentRpcApiMixin : neutron/agent/securitygroups_rpc.py
+ security_groups_rule_updated(…)
+ security_groups_member_updated(…)
+ security_groups_provider_updated(…)
OVSNeutronAgent : neutron/plugins/, each plugin's agent
+ network_delete(context, **kwargs)
+ port_update(context, **kwargs)
+ tunnel_update(context, **kwargs)
SecurityGroupAgentRpcCallbackMixin : neutron/agent/securitygroups_rpc.py
+ security_groups_rule_updated(…)
+ security_groups_member_updated(…)
+ security_groups_provider_updated(…)
Exchanges (fanout) : q-agent-notifier-tunnel-update_fanout, q-agent-notifier-port-update_fanout, q-agent-notifier-network-delete_fanout, q-agent-notifier-security_group-update_fanout
Queues : the same names suffixed with _<uuid>
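The fanout exchange names above follow a fixed <prefix>-<resource>-<action> pattern (cf. get_topic_name in neutron/common/topics.py); a minimal sketch of that topic construction:

```python
# Sketch of Neutron's topic-name convention used for the fanout
# exchanges shown above (simplified from neutron.common.topics).

def get_topic_name(prefix, resource, action):
    """Join prefix, resource table, and operation with dashes."""
    return "-".join((prefix, resource, action))

print(get_topic_name("q-agent-notifier", "tunnel", "update"))
# q-agent-notifier-tunnel-update
print(get_topic_name("q-agent-notifier", "network", "delete"))
# q-agent-notifier-network-delete
```

The messaging layer then appends _fanout for the exchange and _fanout_<uuid> for each agent's private queue.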
L2 Agent to Plugin (L2Agent plugin_rpc → exchange/queue "neutron q_plugin" → Ml2Plugin callbacks)
OVSNeutronAgent : neutron/plugins/, each plugin's agent
+ network_delete(context, **kwargs)
+ port_update(context, **kwargs)
+ tunnel_update(context, **kwargs)
OVSPluginApi / PluginApi : provided through each plugin's agent ; the following are in neutron/agent/rpc.py
+ get_device_details(…, device, agent_id)
+ update_device_down(…, agent_id, host=None)
+ update_device_up(…, agent_id, host=None)
+ tunnel_sync(…, tunnel_ip, tunnel_type=None)
SecurityGroupServerRpcApiMixin : neutron/db/securitygroups_rpc_base.py
+ security_group_rules_for_devices(…)
Ml2Plugin callbacks :
RpcCallbacks : neutron/plugins/ml2/rpc.py
+ get_port_from_device(…)
+ get_device_details(…)
+ update_device_down(…)
+ update_device_up(…)
TunnelRpcCallbackMixin : neutron/plugins/ml2/drivers/type_tunnel.py
+ tunnel_sync(…)
SecurityGroupServerRpcCallbackMixin : neutron/db/securitygroups_rpc_base.py
+ security_group_rules_for_devices(…)
RPC of DHCP agent
DhcpAgent() : neutron/agent/dhcp_agent.py (callback)
+ network_create_end(context, payload)
+ network_update_end(context, payload)
+ network_delete_end(context, payload)
+ subnet_update_end(context, payload)
+ subnet_delete_end(context, payload)
+ port_update_end(context, payload)
+ port_delete_end(context, payload)
DhcpAgentWithStateReport : neutron/agent/dhcp_agent.py
DhcpPluginApi : neutron/agent/dhcp_agent.py (plugin_rpc)
+ get_active_networks_info(…)
+ get_network_info(network_id)
+ create_dhcp_port(port)
+ update_dhcp_port(port_id, port)
+ release_dhcp_port(network_id, device_id)
Neutron to agent (Neutron Server → exchange "neutron" → queues dhcp_agent_fanout_<uuid>, dhcp_agent.<host> → DHCP agent)
DhcpAgentNotifyAPI : neutron/api/rpc/agentnotifiers/dhcp_rpc_agent_api.py
+ notify(…, data, methodname)
DhcpAgentWithStateReport : neutron/agent/dhcp_agent.py
DhcpAgent() : neutron/agent/dhcp_agent.py
+ network_create_end(context, payload)
+ network_update_end(context, payload)
+ network_delete_end(context, payload)
+ subnet_update_end(context, payload)
+ subnet_delete_end(context, payload)
+ port_update_end(context, payload)
+ port_delete_end(context, payload)
Notified events : 'network.create.end', 'network.update.end', 'network.delete.end', 'subnet.create.end', 'subnet.update.end', 'subnet.delete.end', 'port.create.end', 'port.update.end', 'port.delete.end'
DHCP Agent to Plugin (DHCPAgent plugin_rpc → exchange/queue "neutron q_plugin" → Ml2Plugin callbacks)
DhcpPluginApi : neutron/agent/dhcp_agent.py
+ get_active_networks_info(…)
+ get_network_info(network_id)
+ create_dhcp_port(port)
+ update_dhcp_port(port_id, port)
+ release_dhcp_port(network_id, device_id)
DhcpAgentWithStateReport : neutron/agent/dhcp_agent.py
RpcCallbacks : neutron/plugins/ml2/rpc.py
+ get_port_from_device(…)
+ get_device_details(…)
+ update_device_down(…)
+ update_device_up(…)
DhcpRpcCallbackMixin : neutron/db/dhcp_rpc_base.py
+ get_active_networks_info(…)
+ get_network_info(…)
+ release_dhcp_port(…)
+ create_dhcp_port(…)
+ update_dhcp_port(…)
nova.conf
…
network_api_class=nova.network.neutronv2.api.API
neutron_url=http://<eth0:IP Address>:9696
neutron_region_name=RegionOne
neutron_admin_tenant_name=service
neutron_auth_strategy=keystone
neutron_admin_auth_url=http://<eth0:IP Address>:35357/v2.0
neutron_admin_password=<edit password>
neutron_admin_username=neutron
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtGenericVIFDriver
Interaction to boot a VM (OVS bridge)
1. _allocate_network() : _build_instance() on Nova compute
2. Create port : REST API call to the Neutron Server
3. vif_driver.plug()
4. Add a port tapxxxxxx with external_ids set on OVS bridge br-int
5. The Neutron openvswitch agent finds that a port tapxxxxxx was added (loop to detect port updates on br-int)
6. Get the Neutron port id from the external_ids
7. get_device_details(port_id) : over the message queue ; the agent receives the message through the plugin
8. Set up the OVS port so that the network of the VM works
9. update_device_up()
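Steps 5 and 6 above (detect a new tap port on br-int, then read the Neutron port id from its external_ids) can be sketched with an in-memory stand-in for the bridge; the real agent queries OVSDB via ovs-vsctl, and iface-id is the external_ids key the Nova VIF driver sets:

```python
# Sketch of the OVS agent's port-detection loop body, with a dict
# standing in for br-int's port table (the real agent polls OVSDB).

def scan_new_ports(bridge_ports, known):
    """Return (port_name, neutron_port_id) pairs for unprocessed ports."""
    new = []
    for name, external_ids in bridge_ports.items():
        if name in known:
            continue  # already wired up on a previous iteration
        port_id = external_ids.get("iface-id")  # set by the Nova VIF driver
        if port_id:
            new.append((name, port_id))
    return new

br_int = {"tap1a2b3c": {"iface-id": "8f59e83c-7dd9"},
          "patch-tun": {}}  # internal ports carry no iface-id
print(scan_new_ports(br_int, known=set()))
# [('tap1a2b3c', '8f59e83c-7dd9')]
```

For each pair returned, the agent then calls get_device_details(port_id) over RPC (step 7) and finishes with update_device_up() (step 9).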
    Overview & Features Neutron ml2 plugin OpenIRIS ml2 Manager BW QoS/ToS Link Cost Manager OVS-Plugin ARP Proxy E2E Path Visualizer Core Module Topology Manager Forwarding Manager … MAC Learning Status Manager Switch Manager OpenIRIS - pNaaS Tunnel Manager VNID-to-Flow Mapper Virtual Routing Manager Policy Manager ECMP Flow Monitor Queuing Path Computation OF Switch OF Switch OF Switch OF Switch …
    Architecture BW LinkCost Manager OVS-Plugin ARP Proxy E2E Path Visualizer OpenvSwitch VM1 VM2 Compute Node Network Node Control Node OpenvSwitch VM1 VM2 QoS/ToS REST API REST API OF Switch OF Switch OF Switch OF Switch Compute Node OpenvSwitch VM1 VM2 Compute Node OpenIRIS - pNaaS Tunnel Manager VNID-to-Flow Mapper Virtual Routing Manager Neutron API Policy Manager ECMP Flow Monitor Queuing Path Computation
    Overview & Features Overview • Using REST API Features • Network (http://<IRIS IP:8080>/vm/ml2/networks/{uuid})  create_network_postcommit  update_network_postcommit  delete_network_postcommit • Subnet (http://<IRIS IP:8080>/vm/ml2/subnets/{uuid})  create_subnet_postcommit  update_subnet_postcommit  delete_subnet_postcommit • Port (http://<IRIS IP:8080>/vm/ml2/ports/{uuid})  create_port_postcommit  update_port_postcommit  delete_port_postcommit
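The postcommit hooks above address the listed IRIS REST URLs per resource and uuid. A sketch of the URL construction only (the controller address is an assumption, and no HTTP request is made here):

```python
# Sketch: build the IRIS REST URLs used by the ml2 mechanism driver's
# postcommit hooks. The host/port is a placeholder assumption.

IRIS_BASE = "http://127.0.0.1:8080/vm/ml2"  # assumed IRIS controller address

def resource_url(resource, uuid):
    """resource is one of 'networks', 'subnets', 'ports' (from the slide)."""
    return "%s/%s/%s" % (IRIS_BASE, resource, uuid)

print(resource_url("networks", "af57c272-fe28-4a1d-a5e0-48b42508f1ea"))
# http://127.0.0.1:8080/vm/ml2/networks/af57c272-fe28-4a1d-a5e0-48b42508f1ea
```

create/update/delete map onto POST/PUT/DELETE against these URLs in the driver's corresponding postcommit methods.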
Create Network / Subnet REST Call • GET : http://IP:8080/controller/nb/v2/neutron/networks/af57c272-fe28-4a1d-a5e0-48b42508f1ea
Create Network / Subnet REST Call • GET : http://IP:8080/controller/nb/v2/neutron/subnets/d07c4855-f728-415d-b841-c62086a1ca0e
Create VM REST Call • GET : http://IP:8080/controller/nb/v2/neutron/ports/8f59e83c-7dd9-4c8d-b642-67da44b00e30
Create VM REST Call • GET : http://IP:8080/controller/nb/v2/neutron/ports/90a6dfc6-3f72-4aa9-9c99-1c1b8bbd2eac
Install Network Node
• service neutron-server stop
• service neutron-openvswitch-agent stop
• Download the OpenIRIS ml2 mechanism driver into
 /usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers
 /usr/lib/python2.6/site-packages/neutron/plugins/ml2/drivers
• Edit the file
 /etc/neutron/plugins/ml2/ml2_conf.ini
 [openiris]
 [ml2_openiris]
• service neutron-server start
DevStack
• Github
 DevStack : https://github.com/uni2u/DevStack.git (Find bugs...)
 TBD
 Neutron (ml2 plugin) : https://github.com/uni2u/Neutron.git (Find bugs...)
 TBD
• We need a stable version
 Screenshot : Ubuntu 12.04 / 14.04, Fedora, etc.
Todo
DevStack
• Provide the IRIS ml2 plugin in devstack (OpenStack project)
 mechanism_iris, …
 we need devstack!
• Script files
 Easier devstack installation
    Overview & Features Overview • OpenIRIS ML2 Module  Download Git : https://github.com/bjlee72/IRIS.git • Now  TBD
    Architecture BW LinkCost Manager OVS-Plugin ARP Proxy E2E Path Visualizer OpenvSwitch VM1 VM2 Compute Node Network Node Control Node OpenvSwitch VM1 VM2 QoS/ToS REST API REST API OF Switch OF Switch OF Switch OF Switch Compute Node OpenvSwitch VM1 VM2 Compute Node OpenIRIS - pNaaS Tunnel Manager VNID-to-Flow Mapper Virtual Routing Manager Neutron API Policy Manager ECMP Flow Monitor Queuing Path Computation
Overview & Features
Features (ml2 classes)
• IOpenstackML2ConnectorServie.java
 Interface of ML2_Module (OFMOpenstackML2Connector.java)
 Incomplete (the interface is empty)
• NetworkConfiguration.java
 the ml2 plugin calls this class
 REST (http://IP:8080/vm/ml2)
• OFMOpenstackML2Connector.java
 Module class
• RestCreateNetwork.java
 create_network_postcommit (ml2 plugin)
 REST (http://IP:8080/vm/ml2/networks/{uuid})
 Incomplete (PUT, POST, DELETE)
• RestCreatePort.java
 create_port_postcommit (ml2 plugin)
 REST (http://IP:8080/vm/ml2/ports/{uuid})
 Incomplete (PUT, POST, DELETE)
• RestCreateSubnet.java
 create_subnet_postcommit (ml2 plugin)
 REST (http://IP:8080/vm/ml2/subnets/{uuid})
 Incomplete (PUT, POST, DELETE)
Prerequisites and Hands-on
Prerequisites
• VirtualBox ver 4.3.12 (https://www.virtualbox.org/wiki/Downloads)
• Ubuntu 14.04 LTS (http://www.ubuntu.com/download/desktop)
VirtualBox setup
VirtualBox VM Create - Control Node
VirtualBox VM Create - Control Node
VirtualBox VM Start - Control Node
Control Node setup
Installs
• Services deployed
 Compute (Nova) / Network (Neutron) / Object Storage (Swift) / Image Storage (Glance) / Block Storage (Cinder) / Identity (Keystone) / Database (Trove) / Orchestration (Heat) / Dashboard (Horizon)
• Installation order
 System update, upgrade
sudo apt-get update
sudo apt-get upgrade
sudo apt-get dist-upgrade
 Install git, vim
sudo apt-get install git vim
 User permission
sudo adduser stack
echo "stack ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers
 Download Devstack (ver. Icehouse)
git clone https://github.com/openstack-dev/devstack.git -b stable/icehouse devstack/