API and Data Format Essentials
Cisco DevNet Associate

In software development you can observe these trends:


● Web applications
● Proliferation of Mobile applications: Often providing an alternative, seamless access to
Web Applications
● Integration with social media: Apply existing functionalities of social platforms
● Cloud services: for data sharing and processing
● Free software and libraries: To save cost in applications and services rather than
implementing everything from scratch

APIs separate functionality into building blocks; the details of communication between different parts of an application or service are specified by the Application Programming Interface (API). APIs allow faster prototyping and development of software by enabling communication between computer systems or programs and by specifying the details of how information is exchanged, thereby facilitating code functionality and reuse.

Using APIs
To use APIs keep in mind these considerations:
● Modular software design
● Prototyping and testing API integration
● Challenges in consuming networked APIs
● Distributed computing patterns

Prototyping and testing API actions allow us to verify the feasibility of a chosen design.
The main purpose of APIs is to expose functionality, so documentation is at least as important
as implementation. Developers rely on API documentation to provide information such as:
● Which functions or endpoints to call
● Which data to provide as parameters or expect as output
● How to encode protocol messages and data objects
API data formats
There are many different data formats used by applications to communicate with the wide range of APIs available on the Internet. Each format represents syntax that can be read by a machine but also understood by humans.

You will most likely encounter these common data formats


● YAML Ain’t Markup Language (YAML)
● JavaScript Object Notation (JSON)
● eXtensible Markup Language (XML)

The most common uses for each language are:


● XML: Transformation with XSL, applying XML schemas
● JSON: Server-web page communication, configuration files
● YAML: Configuration files

Importance of a Data format

Know your audience and understand whether the format is meant to be human readable or machine readable. This is the reason why we will be using the three aforementioned data formats. In general, you can represent any type of data in any of the data formats.

XML is generally considered the least human readable of the three; it is mostly used to exchange highly structured data between applications (machine-to-machine communication).

JSON serves as an alternative to XML, as it is often smaller and easier to read. It is mostly used to transmit data between a server and a web page, and JSON is basically a subset of YAML.

YAML is the most human friendly of the three and is a good starting point for people who are new to writing code.

Data formats are the foundation of APIs; they define the syntax and semantics, including the constraints of working with the API.

A syntax is a way to represent a specific data format in textual form. Regardless of the syntax used, each data format has a concept of an object. You can think of an object as a packet of information: an element that has characteristics. An object can have one or more attributes attached to it.

Many characteristics break down to the key-value concept, where keys and values are separated by a colon.
The key identifies a set of data and is placed on the left side of the colon.
Values are the actual data that you are trying to represent; this data usually appears to the right of the colon and can be a string, integer, array/list, Boolean, or object.
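As an illustration, here is the same hypothetical "user" object expressed in each of the three data formats (the names and values are invented for the example):

In XML:

    <user>
      <name>alice</name>
      <location>San Jose</location>
    </user>

In JSON:

    {
      "user": {
        "name": "alice",
        "location": "San Jose"
      }
    }

In YAML:

    user:
      name: alice
      location: San Jose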

As the example above shows, the same data can be represented in different data formats, so the choice really comes down to two factors:
● If the system you are working on prefers one syntax over the other, choose that one
● If the system supports any syntax, work with the one you feel most comfortable with
XML

Extensible Markup Language. It is similar to HTML in that both are markup languages, meaning that they indicate which parts of a document are present, not how the data is going to be shown in your system.

For that purpose, XML heavily uses <tags> </tags> to surround elements in the form <key>value</key>; all the information of an object starts at the opening tag and ends with the closing tag, which contains a slash (</tag>).

An object usually contains multiple other objects inside it; as shown in the sketch below, an object can contain very basic information or more complicated data with tags nested inside that object. XML is a human-readable data structure that applications use to store, transfer, and read data.

Whitespace in XML is not significant; you may see an XML file with indentation, but that is only to make it easier for humans to read.

A list in XML can be composed of XML objects using repeated instances of <tags></tags> for each element.
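As a small sketch (the element names are invented for the example), an object with nested data and a list of repeated <interface> elements could look like this:

    <device>
      <hostname>router1</hostname>
      <interfaces>
        <interface>
          <name>GigabitEthernet1</name>
          <enabled>true</enabled>
        </interface>
        <interface>
          <name>GigabitEthernet2</name>
          <enabled>false</enabled>
        </interface>
      </interfaces>
    </device>

The indentation here is purely for readability; the same document on a single line would carry exactly the same information.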
XML namespaces
It is possible to encounter applications that use the same tag to represent completely different information at the same time; solving that issue requires the use of namespaces and prefixes.

For example, suppose <table> exists in two different XML documents. Although the starting tag is the same, the nested information differs between each document, which might cause a name conflict when parsing the data. To avoid that issue, you can define a prefix and a namespace for each element.
A prefix is an alphabetic character or a string put before the actual tag name, followed by a colon (<a:tagname> or <b:tagname>); this way you are defining an exact tag name for your application to parse.
When using prefixes, we need to define namespaces for those prefixes; the name of a namespace is a Uniform Resource Identifier (URI).
Namespaces are defined with the xmlns attribute in the starting tag of an element, with a syntax like xmlns:prefix="URI".
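A minimal sketch of prefixes and namespaces (the namespace URIs are placeholders):

    <root xmlns:f="http://www.example.com/furniture"
          xmlns:h="http://www.example.com/html">
      <f:table>
        <f:legs>4</f:legs>
      </f:table>
      <h:table>
        <h:tr>
          <h:td>Apples</h:td>
        </h:tr>
      </h:table>
    </root>

Each <table> tag is now unambiguous because its prefix ties it to a different namespace.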
JSON

JSON was derived from the JavaScript programming language. JSON uses curly braces, square brackets, and quotes for its data representation. Typically the very first character in a JSON document is a curly brace "{" defining a new object structure; below that, other objects can be defined in the same way, starting with the name of an object in quotes, followed by a colon and a curly brace.
Unlike YAML, whitespace is not significant in JSON; you are free to choose whichever formatting style you want to use, as long as the other syntax rules remain satisfied.

The JSON file format is similar to YAML in that it contains key-value pairs. In this data format, every object starts and ends with a curly brace "{ }", and JSON uses square brackets "[ ]" to represent arrays or lists, as in the sketch below.
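A small JSON sketch of a hypothetical "user" object that also contains a list of roles:

    {
      "user": {
        "name": "alice",
        "location": "San Jose",
        "roles": ["admin", "operator"]
      }
    }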

Note that all values attributed to the user in the example are separated by a comma. Separating values by a comma is obligatory for all objects in a list except for the last one.
YAML

YAML is not a markup language like XML; it is more easily written and read by humans.

Whitespace is significant in YAML because whitespace indentation defines the structure of a YAML file.

In the example shown below, the first object that is indented is "name", which is a child node of "user". All data at the same indentation level are attributes of the same object. The next level of indentation starts at "location". Bear in mind that TAB indentation is not allowed in YAML because tabs are treated differently by different tools.

In YAML, keys and values are separated only by a colon and a space, which is very intuitive for humans. YAML will try to assume which data type is intended as the value, so no quotes are necessary. As there are no commas at the end of any of the values attached to a key, YAML automatically knows where a value ends. Also, the "-" dash syntax denotes lists: put a dash at the same indentation level in front of every element to add it to the same list.
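A small YAML sketch matching the description above (the values are invented):

    user:
      name: alice
      location:
        city: San Jose
        state: CA
      roles:
        - admin
        - operator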
Introducing Network Based APIs

HTTP Overview
HTTP is an application layer protocol and is the foundation of communication for the World Wide Web. It is based on a client/server computing model, where the client (a web browser) and the server (a web server) use a request-response message format to transfer information.

HTTP operates at the application layer of the TCP/IP model, using TCP. HTTP is stateless (connectionless) by default.

The information is media-independent. Any type of data can be sent by HTTP, as long as both
the client and the server know how to handle the data content.

The following is the process of the request-response cycle:

● Client sends an HTTP request to the web server
● Server receives the request
● Server processes the request
● Server returns an HTTP response
● Client receives the response

HTTP requests do have some constraints. They are limited in size and URL length and will return an error if the limit is exceeded. Very long URLs (more than 2048 characters) and big headers (more than 8 KB) should be avoided.

HTTP URL

HTTP requests use URL to identify and locate the resources targeted by the request. The
“resource” term in the URL is very broadly defined, so it can represent almost anything: a simple
web page, an image, a web service, or something else.
URLs are composed from predefined URI components

A URL can contain the following components:


1. Scheme: Each URL begins with a scheme name, usually referring to HTTP, HTTPS,
mailto, data and so on.
2. Host: Can be a Qualified domain name, IPv4 or IPv6
3. Port: Optional parameter that specifies connection port.
4. Resource Path: Sequence of hierarchical path segments, separated by a slash, is
always defined, although it may have zero length.
5. Query: Optional parameters, preceded by question mark (?)
6. Fragment: Also an optional parameter, fragment starts with a hash (#) and provides
directions to a secondary resource (for example, a specific page in a document). It is
processed by the client only.

Two commonly mentioned terms in relation to URLs are URNs and URIs
● URI - identifies a resource: ../people/alice
● URL - also tells where to find it: http://www.example.com/people/alice
● URN - identifies a resource using a (made-up) urn scheme: urn:people:names:alice

URI however is used to unambiguously identify a resource and is a superset of URLs and
Uniform Resource Names, which means all URNs and URLs are URIs, but not vice versa.

Take for example the following URI:


http://maps.googleapis.com/maps/api/geocode/json?address=sanjose

● http:// or https:// defines whether the open or secure HTTP protocol is used
● maps.googleapis.com refers to the server or host, which resolves to the IP address and port to connect to
● /maps/api/geocode/json is the resource, which is the location of the data of interest on the server
● ?address=sanjose refers to the parameters, which are details to scope, filter, or clarify a request, often optional; the "?" specifies where the parameters start
HTTP Methods

HTTP methods, also known as HTTP verbs, are a predefined set of request methods that represent desired actions to be performed on resources. They are used in HTTP requests as part of the request line.

Their typical purposes map to the CRUD (Create, Read, Update, Delete) operations:

● POST - Create, Used to create a new Object


● GET - Read, Retrieve resource details from the system
● PUT - Update, Used to replace or update a resource
● PATCH - Update, used to modify some details about a resource
● DELETE - Delete, removes a resource from the system

HTTP Status Codes

Depending on the code, you can determine what type of response you are getting. The most common classes are:
● 1xx - Informational, usually means that the request is still being processed
● 2xx - Success; the most common are 200, 201 and 204
● 3xx - Redirection; as the name suggests, indicates that the resource has been moved to a different location
● 4xx - Client error; could be due to a malformed request, an unauthorized user, or the host could not find anything matching the URI
● 5xx - Server error
To determine whether a call worked, we rely on the response status; often there will be only two:
● 200 - Ok, All looks good
● 404 - Not Found, Resource not found

But they can be expanded to


● 200 - Ok, All looks good
● 201 - Created, New Resource created
● 204 - Server fulfilled the request and the response body is empty
● 400 - Bad Request, Request was invalid
● 401 - Unauthorized, Authentication missing or incorrect
● 403 - Forbidden. Request was understood, but not allowed
● 404 - Not Found, Resource not found
● 500 - Internal Server Error, Something wrong with the server
● 501 - Not implemented, the server does not support the functionality required.
● 503 - Service Unavailable, Server is unable to complete requests

HTTP Headers

Headers are a list of key-value pairs that the client and server use to pass additional information or metadata between them in requests. The most common request headers are:

● Cache-control: Specifies caching parameters


● Connection: Defines connection persistency
● Date: A datetime timestamp
● Accept-(*): Defines the preferred response Type
● Authorization: Usually contains a Base64 encoded authentication string, composed of
username and password for basic HTTP authentication
● Cookie: Contains a list of key-value pairs that contain additional information about the
current session user, browsing activity or other stateful information
● Host: Used to specify the internet host and port number of the resource being accessed
● User-Agent: Contains the information about the user agent originating the request.

Response Headers

The response headers hold additional information about the response and the server providing
it.

The most common are:

● Age: How long since the response was generated


● Location: Redirects the client to a location other than the requested URL
● Server: Information about the software used by the origin server to handle the requests
● Set-Cookie: Send cookies from the server to the client
The following are Entity headers, which contain information about the response body.

● Allow: Lists supported methods identified by the requested resource


● Content-Type: Indicates the media type of the body (also called Multipurpose Internet
Mail Extensions (MIME) type), sent to the recipient. Used for content negotiation.
● Content-Language: Describes the language of the intended audience for the enclosed
body.
● Content-Length: Indicates the size of the body
● Content-Location: Supplies the resource location for the entity that is accessible from
somewhere else than the request URI
● Expires: Gives the datetime after which the response is considered stale
● Last-Modified: Indicates the date and time at which the origin server believes the
variant was last modified.

HTTP Content Negotiation

HTTP is used to deliver a wide variety of content that varies in language, size, type and more. Because supplying all content with every request is not practical and the remote content format is not always known, HTTP provides several mechanisms for content negotiation. The client states its preferred resource representation in various "Accept" request headers. If the requested representation is not implemented on the server, the server returns status code 406.

The following is an example of basic HTTP content negotiation.

The client asks the server for a particular resource using a GET request, specifying that the response should be in text/html; the server responds with a 200 status code, which means OK.

However, when the client asks for the same resource but requests a format like JSON that is not implemented on the server, the server returns status code 406, as in the sketch below.
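A sketch of that exchange (the host and resource path are hypothetical):

    GET /api/users/42 HTTP/1.1
    Host: www.example.com
    Accept: text/html

    HTTP/1.1 200 OK
    Content-Type: text/html

    GET /api/users/42 HTTP/1.1
    Host: www.example.com
    Accept: application/json

    HTTP/1.1 406 Not Acceptable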
As noted, HTTP headers take care of the content negotiation. The most commonly used is Accept, which denotes the preferred media (MIME) type for the response. A media type consists of a category and a subtype; a general type can be either discrete (representing a single resource) or multipart, where a resource is broken into pieces, often using several different media types (multipart/form-data).

Some useful discrete general types are:

● Application: Any kind of binary data that does not fall into other types; generic binary data uses application/octet-stream, while the more standardized formats include application/json or application/xml
● Audio: Audio or music data, for example audio/mpeg
● Image: Image or graphical data, including bitmap and GIF: image/bmp, image/gif
● Text: Text-only data, including text/plain, text/javascript or text/csv
● Video: Video data or files, like video/mp4

Accept can also be used with the following combinations:

● Accept-Charset: Sets the preferred character sets, such as UTF-8 or ISO 8859-1
● Accept-Datetime: Requests a previous version of the resource; the value must always be older than the current datetime
● Accept-Encoding: Sets the preferred encoding type for the content
● Accept-Language: Sets the preferred natural language

Headers can therefore cause a similar request to return different data: the encoding varies depending on the preferred format (XML or JSON), and the same information can also vary depending on the requested language (British English vs. American English).
APIs
An API is a way for two pieces of software to talk to each other; API stands for Application Programming Interface.
When humans were the only users of systems, a system would display information in the form of a user interface. Today, as systems or programs may consume other programs' data, they use APIs to communicate with each other.

APIs are the sets of requirements that govern how one application can talk to another.
APIs help developers create apps that benefit the end user.

The most commonly used are RPC-Style APIs and REST APIs

RPC-Style API

RPC stands for Remote Procedure Call, and as the name suggests, an RPC-style API calls a remote procedure located in a different address space (similar to how it would call a procedure locally). A different address space can be on a different computer or on a different network.

The client sends a call to the server and then waits for the server to return a reply message.
Once the reply is received the results of the procedure are extracted and the client execution is
resumed. RPC calls can be executed asynchronously.

As the procedures are executed remotely, you should be aware of the following:
● Error handling remotely is different than local error handling
● Global variables and side effects are sometimes unknown or hidden to a client
● Performance of remote procedures is worse than local procedures
● Authentication might be necessary as sometimes calls are transported over insecure
networks
Because RPC API is only a style of building an API, many different protocols have evolved that
implement remote procedure calls

Simple Object Access Protocol (SOAP)

● Standard designed by Microsoft


● Used to build Web Services (Software available over the internet)
● Typically uses HTTP, and dependent on XML
● Sometimes considered complex and rigid

Has three main characteristics

● Extensibility - Features can be added without major updates to the implementation


● Neutrality - Is not protocol specific and can operate over several different transport
protocols such as HTTP, TCP, SMTP, and so on.
● Independence - Supports any programming model, platform and language

The Simple Object Access Protocol specification defines the messaging framework which
consists of four parts:

Envelope: Identifies the XML as a SOAP message; required
Header: Contains SOAP header information; not required
Body: Body of the message; contains the remote call, parameters, and response; required
Fault: Provides information about any error that occurred; not required

XML-RPC and JSON-RPC

● Simple frameworks for communicating over HTTP


● RPC = Remote Procedure Call (When one system requests another system to execute
code)
● Offer XML and JSON data formats respectively

Representational State Transfer (REST)

● API framework intended to build simpler web services than SOAP


● Another use for the HTTP protocol
● Popular due to performance, scale, simplicity, and reliability
● Technically an API framework
Other APIs out there

NETCONF (Network Configuration Protocol)

● Designed as a replacement for SNMP


● Standardized in 2006, updated in 2011
● Leverages SSH and XML
● Defines transport and communication
● Tightly coupled to YANG for data

RESTCONF Protocol

● Provide REST API like interface to network


● Standardized in 2017
● Supports XML and JSON
● Defines Transport and communication
● Tightly coupled to YANG for data
REST APIs

● REST is another use for the HTTP protocol, stands for Representational State Transfer
● API framework built on HTTP
● APIs often referred to as web services
● Popular due to performance, scale, simplicity, and reliability

In a REST API structure, we need to think in terms of requests and responses; the flow is the same whether you are programming a network interface or querying weather information.

A look under the Hood of REST APIs

URI: What are you requesting?

URI stands for Uniform Resource Identifier; there are many components that need to be addressed.

Headers: Details and Meta-data

The following will be the most common headers that you can find when using APIs, along with
some example values

● Content-Type: Specifies the format of the data in the body
○ application/json
● Accept: Specifies the requested format for returned data
○ application/json
● Authorization: Provides credentials to authorize
○ Basic
● Date: Date and time of the message

They are used to pass information between client and server


Included in both REQUEST and RESPONSE
Some APIs will use custom headers for authentication or other purpose

Data: Sending and Receiving

● Data is contained in the body


● POST, PUT, PATCH requests typically include data
● GET responses will include data
● Format typically JSON or XML
○ Check Content-type
HTTP Authentication and Security

None: The Web API resource is public, anybody can place calls.
Basic: A username and password are passed to the server in an encoded string
Token: A secret generally retrieved from the Web API developer portal
OAuth: Standard Framework for a flow to retrieve an access token from an Identity Provider,
can be identified as Bearer

Network Programmability with RESTCONF

● -u provides user:password for Basic Authentication


● -H to set headers
● Lines beginning with “>” indicate Request elements
● Lines beginning with “<” indicate Response elements
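A hedged curl sketch matching the flags above (the device address, credentials, and path are placeholders); the -v option prints the ">" request and "<" response lines:

    curl -v -u "admin:password" \
      -H "Accept: application/yang-data+json" \
      https://10.0.0.1/restconf/data/ietf-interfaces:interfaces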

There are many tools to work with REST APIs

● Curl: Linux command line application


● Postman: Chrome browser plugin and application
● Requests: Python library for scripting
● Swagger: Dynamic API documentation
● Browser Developer Tools: View Traffic and details within browsers

We will focus on Postman as it has the following benefits

● Quickly test APIs in GUI


● Save APIs into Collections for reuse
● Manage multiple environments
● Auto generate code from API calls
● Standalone Application
In the regular Postman GUI you can identify how each part correlates with an API call:

First there is the API method and URI; there are also buttons to send the request and to save it for later use.

Moving further, you can manage the request authorization, headers, and data (body).

Adding parameters is reflected directly in the URI.

Once the request has been sent, we receive a response, in which we can see the status.

Using Environments

Variables make requests reusable and flexible, as it is never good to hardcode details. For example, what if you want to connect to a different host? Or what if the credentials change? This is what variable references are used for.
We can turn URIs, users, or passwords into variables by wrapping them in "{{ }}" double curly braces.
Environments are meant to store variable references and use them in different workspaces.
We can also use dynamic variables (for example, when you need to take a response and pass it over to another request).
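For example, a request URL might reference variables like this (the variable names are arbitrary), with {{host}} and {{port}} resolved from the active environment:

    https://{{host}}:{{port}}/restconf/data/ietf-interfaces:interfaces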
Introduction to Python
Python was created by Guido van Rossum and was first released in 1991. It has gained popularity due to the fact that it is fast, powerful, platform independent, and open source. The syntax was designed to be clear and readable. Python is widely available, whether you are on Linux, Mac or Windows, and you will even find Python available on multiple routers and switches across Cisco platforms.

Python scripts are simple (UTF-8) text files; you could write a script in any text editor available, however to run it you need a Python interpreter.
To check if you already have Python installed, try running the python --version or python -V command from your console window.

If Python is installed, you will be presented with the installed version; if not, you have to install it manually.
You can use python -i to access the Python interactive shell.
There are some useful commands that can be used in the interpreter:
● dir(): Returns all variables, classes, and objects available
● dir(name): Returns the attributes of an object
● help(name): Built-in help system; displays docs and info for the object
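A short interactive sketch (output shortened):

    $ python -i
    >>> a = "hello"
    >>> dir()            # names defined in the current scope
    ['__builtins__', ..., 'a']
    >>> dir(a)           # attributes and methods of the string object
    [..., 'split', 'startswith', 'strip', ...]
    >>> help(str.split)  # built-in documentation for str.split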
Virtual Environments

Python has a feature called virtual environments, which allows you to run specific Python versions and packages for specific projects. They are isolated, which is useful since it allows pip to install packages into a specific environment; without them, all libraries would be installed globally.

To start working with virtual environments, you need to install the library from the Python Package Index: pip install virtualenv
Then create a virtual environment using python -m venv <name> (in Windows); in Linux it would be created as virtualenv name. This will create a new folder using the name provided, with the following structure. It has to be done on the command line.

Libraries installed with pip will be placed in the lib directory, and executable scripts installed by pip will be placed in the bin directory.
To specify the Python version, use virtualenv name --python=python<n>

To activate your virtual environment, use the command source name/bin/activate; in Windows, to activate a venv, you use name\Scripts\activate.
You can deactivate your virtual environment by using deactivate.
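A quick sketch of the whole workflow on Linux/macOS (the environment name venv-demo is arbitrary):

    pip install virtualenv
    virtualenv venv-demo                 # or: python -m venv venv-demo
    source venv-demo/bin/activate        # Windows: venv-demo\Scripts\activate
    pip install requests                 # installed only inside venv-demo
    deactivate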
Basic Python syntax

Basic Data types

As you would expect, Python has basic data types to store any value you need, and also has a full set of operators to work with numeric and string types.

Python is introspective: you can see what type of data something is by using type(). For example, a whole number such as 5 is an int; a string such as "DevNet" is a str (remember to use either ' ', " ", ''' ''', or """ """); True and False are bool; and a number like 3.1416 is a float.

Working with operators, you can concatenate strings such as "ACD", "EFG" and "HIJ" with the + operator, or copy a string "n" times with the * operator; Python will respect the spaces inside your strings, as sketched below.
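A short interactive sketch of type() and the string operators described above:

    >>> type(5)
    <class 'int'>
    >>> type("DevNet")
    <class 'str'>
    >>> type(True)
    <class 'bool'>
    >>> type(3.1416)
    <class 'float'>
    >>> "ACD" + " " + "EFG" + " " + "HIJ"   # concatenation; spaces are kept
    'ACD EFG HIJ'
    >>> "na" * 4                            # copying a string n times
    'nananana'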

There are also some powerful methods available with string objects.

"{}".format(): lets you insert named or unnamed placeholders {} in a string and then use .format() to insert values into those placeholders.

" ".split(): lets you split a string into multiple strings using a separator that you provide as the delimiter.

"".join(): lets you take a sequence of strings and join them with a separator you define.
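Short sketches of the three methods:

    >>> "{} likes {}".format("alice", "python")
    'alice likes python'
    >>> "10.0.0.1,10.0.0.2,10.0.0.3".split(",")
    ['10.0.0.1', '10.0.0.2', '10.0.0.3']
    >>> ", ".join(["GigabitEthernet1", "GigabitEthernet2"])
    'GigabitEthernet1, GigabitEthernet2'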

Defining variables
Python is a dynamically typed language, which means that you don't have to declare a variable or its type before using it.
To define a variable, simply use the assignment operator "=" to assign a value to the variable name.

Bear in mind that variable names must follow these rules:

Cannot start with a number [0-9]
Cannot conflict with a language keyword
Can contain: [A-Za-z0-9_]

In Python, unlike other programming languages, everything is an object. Objects are purpose-
built groupings of variables (called attributes) and functions (called methods) that work
together to do something useful.

To access the attributes (variables) or methods (functions) contained within an object, all you
need to remember is that you use “.” dot-syntax to do so. Putting a period after any object
lets you access the attributes and methods available within the object

Input and Output


Use the input() function when requesting some information from the user. It will display your prompt to the user and return their input, which you can assign to a variable. input() will always return a string (str).
Use the print() function to display output to the user. It accepts a variable number of parameters, converts them all to strings (str), and joins them together with a space " " separator between each of the items. It is a quick and easy way to combine several items and display the output to the user.
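A minimal sketch:

    name = input("What is your name? ")   # always returns a str
    device_count = 3
    print("Hello", name, "- you manage", device_count, "devices")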

Reading from and Writing to files

First, store the name of the file that you want to open in a variable.

Then create a function to read the log file, using the built-in open() function; to use it, you need to specify whether you want to read ("r"), write ("w") or append ("a").

A with block is useful because it closes the file automatically; otherwise we would need to close the file manually. You need to specify a variable for the with block (f in the sketch below).

Writing to a file is fairly similar, as we would use the open() function, and as a good practice we can again use a with block.
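A minimal sketch, assuming a log file named device.log exists in the current directory:

    # Reading: the with block closes the file automatically
    with open("device.log", "r") as f:
        contents = f.read()
    print(contents)

    # Writing ("w" overwrites, "a" appends)
    with open("output.log", "w") as f:
        f.write("interface GigabitEthernet1 is up\n")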
With variables comes the possibility (likelihood) that you may not know what they point to, or perhaps you want to branch your code and take different paths based on some condition. That's why we have if and elif ("else if") statements; you need to add a ":" at the end of each if expression.
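A minimal sketch of the syntax:

    status_code = 404

    if status_code == 200:
        print("All looks good")
    elif status_code == 404:
        print("Resource not found")
    else:
        print("Something else happened")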

The above if syntax reads:

1. If expression 1 is true, do the following block of indented statements
2. Else, if expression 2 is true, do the following block of indented statements
3. Else, do the following block of statements

There are operators for comparison and logical expressions

You can combine expressions with and or or. And negate expressions with not
Functions

One of the main rules of Python is Don't Repeat Yourself (DRY). If you find yourself writing identical or fairly similar blocks of code in your script or scripts, it's time to create a function. Functions let you write a piece of code once, give it a name, and then call that piece of code whenever you need it. They can (optionally) accept input arguments and return an output, allowing you to create operations that take some inputs and return some output according to your needs.

When defining a function, arguments are variable names that you create. Later on, when you
call your function, you will pass in values for each of the arguments and these values will be
assigned to the variable names you defined. Your function may optionally return some value

A function doesn't have to accept any arguments or return any values; you could simply be calling a function to do some predefined set of actions. Note that a function implicitly returns None if it reaches the end of its block of statements. None is the closest thing Python has to a null value.
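A minimal sketch of a function with arguments and a return value:

    def circle_area(radius, pi=3.1416):
        """Return the area of a circle."""
        return pi * radius ** 2

    area = circle_area(2)
    print(area)   # 12.5664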
Native Python Data Structures

Python comes fully equipped with a full set of native data structures, some of them are Lists,
Tuples and Dictionaries.

Lists are an ordered collection of items. A list is mutable, which means you can add or remove values, and lists are identified by "[ ]" square brackets.
They are analogous to arrays in other programming languages; however, a list can contain elements of different data types. The elements are separated with a "," (comma).

You can then access and update the elements within the list via the element's index; indexes start at 0. You can also append items to a list with the .append() method.

Tuples are just like lists in that they are an ordered collection of items; however, they are not mutable (cannot be changed). They are identified by "( )" parentheses, and just like in lists, elements are separated with a comma. Indexes start at 0.

Dictionaries are an unordered collection of items (called key-value pairs); keys don't have to be the same data type, and values don't have to be the same data type. Keys are unique and must be immutable. Dictionaries are identified by "{ }" curly braces and are analogous to hashes in other programming languages; you separate a key from its value with a colon ":" and the key-value pairs with a comma. Dictionaries are rather different in that you use the key as the index.

Keys: Have an important restriction; whatever you want to use has to be immutable and hashable, which means you could use a tuple as a key but not a list.
Values: Can be anything, and like lists, the types don't have to be consistent.

It is important to remember that in order to access a key of the dictionary, you need to use "[ ]" square brackets.

A useful capability in Python is the option of creating a list of dictionaries, which combines the two structures: the outer square brackets denote the list, and each element is a dictionary in curly braces, with the dictionaries separated by commas, as in the sketch below.
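A short sketch of the three structures and a list of dictionaries (the values are invented):

    devices = ["router1", "switch1", "firewall1"]     # list - mutable
    print(devices[0])                                 # 'router1'
    devices.append("ap1")

    coordinates = (37.33, -121.89)                    # tuple - immutable

    user = {"name": "alice", "location": "San Jose"}  # dictionary
    print(user["name"])                               # access by key

    interfaces = [                                    # list of dictionaries
        {"name": "GigabitEthernet1", "enabled": True},
        {"name": "GigabitEthernet2", "enabled": False},
    ]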
Other Python Collections

In addition to native data structures, Python includes a collections module in the Standard
Library.

OrderedDict Collection

OrderedDict works almost like the native dict data structure, with one enhancement: it maintains the order of the items added to it. Since it is part of the collections module and not a native data structure, you need to import it into the script.
To import it, use from collections import OrderedDict

Values are stored in a similar fashion to a regular dict, but with some differences:
We have to import the collection.
We add the key-value pairs, which are stored in order; however, to add them we use "[ ]" and then "=".
The contents will look like a list of tuples, as in the sketch below.
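A minimal sketch:

    from collections import OrderedDict

    port_map = OrderedDict()
    port_map["HTTP"] = 80        # keys are added with [ ] and =
    port_map["HTTPS"] = 443
    port_map["NETCONF"] = 830

    print(port_map)
    # OrderedDict([('HTTP', 80), ('HTTPS', 443), ('NETCONF', 830)])
    # (the exact repr may vary slightly between Python versions)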
Python Loops

In Python, for loops are so useful that coders rarely use while. A for loop iterates through a sequence or collection, essentially taking one item at a time and running the block of statements until all of the items have been iterated through (or you manually break out of the loop from within the code block).

In this syntax, individual_item is just a variable with a name of your choosing, and an iterator is something that can be iterated. Many things can be iterated; in particular, lists, tuples and dicts can be iterated (iterating over a dictionary will only iterate through the keys, unless you use the .items() method).

Python has a built-in range() function that creates sequences of numbers. The range() function accepts start, stop, and an optional step.

You can iterate through a dictionary and print key-value pairs, and there is a feature called unpacking where you can assign a collection of items to a collection of variables; it also works for OrderedDicts. Both are sketched below.
It also works for OrderedDicts

Python Script Structure and execution

When you call the Python interpreter and tell it to run a script, the interpreter does so
“Synchronously”, meaning that it starts at the top of your script and goes line by line executing
each statement one at a time. If a particular statement takes a while to complete or is waiting on
a response from some external source (like making an API call and waiting on the response),
Python stops and waits for the current statement to finish executing before moving on to the
next one.

This makes understanding the flow of your script and how to troubleshoot it much easier as
everything is predictably and sequentially loaded and executed.
Please note that a Python file may be referred with different names, the most common are:
● Script: Which denotes that the file has to be executed
● Module: Its contents are meant to be imported and used by another calling script
Let’s take a look at the following Sample script and the Interpreter’s execution process.
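Here is a minimal sample script (the names are invented) so that the numbered flow below has something concrete to refer to:

    #!/usr/bin/env python
    """Sample module docstring: report the working directory."""

    import os

    API_VERSION = "v1"          # module constant (by convention, ALL CAPS)
    location = os.getcwd()      # module-level "global" variable

    def show_location(prefix):
        """Module function: print the working directory."""
        print(prefix, location)

    if __name__ == '__main__':
        show_location("Running from:")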

When we run this script from the terminal, the Python interpreter will start with the first line and
execute each statement in succession.
The flow is:

1. The “Shebang” line. The first statement in many scripts isn’t meant for the Python
interpreter. This line tells the shell attempting to run the script what interpreter should be
used to execute the script

2. The triple quoted string at the beginning of this Python script is a module docstring,
which serves as a built-in mechanism for documenting the purpose and functionality
provided by the module, the interpreter will save this string as a special __doc__
variable
3. Import statements, import other code into your script so that you can use their
functionality. When the interpreter encounters an import statement, it opens the module
being imported and, starting with the first line of that module, executes each of the
statements in that file. The contents of that module are then available to your script
through the module’s name, using dot syntax, to access the variables, functions and
classes within the module

4. Module constants, named by convention using all-CAPS variable names, they can be
changed; however, if they are in caps, you should not change them.

5. Module-level “Global” variables, every function and class within the module will have
at least “read access” to these variables as they exist at the top level within a module

6. Modules Functions and Classes, as the interpreter finds more functions and classes, it
stores them in memory to make them available to subsequent portions of your code.
Note that statements within a function are not executed until you call the Function and
provide the required arguments to execute its block of statements

7. The if __name__ == '__main__' block: Some Python files can be executed or imported (script/module). When a script is executed, it is given the internal __name__ of __main__ by the interpreter. All other modules, when imported, will see their __name__ as their own module name, allowing us to use the __name__ variable to determine at runtime if our Python file is being executed or imported. You can call your "main" function whatever you like.
Imports

You will see two different syntaxes used for importing functionality into your script:
● import statement: import os
● from ... import ... statement: from os.path import abspath
In the first, we are importing the functionality of the module "os"; however, to access the functionality of the abspath function, we need to use the whole dot syntax (os.path.abspath()).
In the second option, we are importing the functionality abspath directly, which removes the need to use dot syntax, allowing us to call it as abspath(). The from ... import ... statement provides a way for us to pull in only the functionality we need and simplify the names of deeply nested resources.

Variable Scope

Variables are said to be defined in the scope in which they are created

● Module scope: Variables created outside of any function or class; they are accessible to all functions and classes created within the module.
● Local scope: Variables defined within a function or class; they may only be accessed by statements within the local scope in which they were created. Function arguments are locally scoped variables.

Python Libraries

In Python, you don't need to write every single piece of code yourself; you will discover many relevant code samples and training resources publicly available on the Internet for common activities.
A library is practically any code outside your script that you want to use; just by importing a library into your Python code, you get a wide range of features that the library provides without needing to develop them yourself.

If you want to check which libraries are included in your chosen version of Python, check the website https://docs.python.org/3/library.
Importing packages is as straightforward as using an import library statement inside your Python code; sometimes you might only need a specific resource inside a library, in which case you would use from library import resource.

In addition to the standard library, there is a growing collection of packages, libraries and even entire applications available on the Python Package Index (PyPI). It is a repository of Python software that you can take advantage of by installing packages and importing them into your script.

The “pip install library” command gives you the ability to install and download all the available
libraries directly from PyPI.

The package manager for Python (PIP) is already installed if you are using Python version 3.4
or newer. To be sure it’s present you can issue the command “pip --version” on your cmd

If you are unsure about the package name or if a certain package even exists, you can always
browse through all packages available on the website https://pypi.org or issuing the command
pip search on your console window, after finding the needed one, you can use the command
pip install <libraryname>, however, it is not the only one.

● pip install package - Installs a package
● pip install --upgrade package - Upgrades a package
● pip uninstall package - Uninstalls a package
● pip freeze - View all packages installed
● pip install -r requirements.txt - Install everything listed in requirements.txt

pip install should be run in your console window and not in your code.
Another option when searching for a specific library is accessing GitHub repository, from there
you can see the popularity of a specific library and when it was last updated.

Foundational Libraries

● Pretty Print - Better formatting than the default print() function; use pprint()
○ from pprint import pprint
● Python Interpreter Utilities - Access to some details/variables concerning the running state
○ import sys
■ Access command line arguments with sys.argv
■ Access functions that interact with the interpreter, such as sys.exit()
● Operating System Interfaces - Interact with files, paths and environment variables within the OS
○ import os
■ Change directory: os.chdir("")
■ Get working directory: os.getcwd()
■ Read or set environment variables: os.environ[var_name]
● Date and Time Utilities - Create, format, and manipulate dates and times, work with timestamps and other representations
○ import datetime
■ datetime.timedelta(days=x) represents a span of x days (hours, minutes and other keywords also work) that can be added to now() to get a time x days from now
■ A datetime object's .strftime(date_display_format) method converts the time into a string readable by humans
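Short sketches of the four utilities (the HOME variable assumes Linux/macOS):

    from pprint import pprint
    import sys
    import os
    import datetime

    pprint({"name": "alice", "roles": ["admin", "operator"]})

    print(sys.argv)                              # command line arguments
    print(os.getcwd())                           # current working directory
    print(os.environ["HOME"])                    # read an environment variable

    tomorrow = datetime.datetime.now() + datetime.timedelta(days=1)
    print(tomorrow.strftime("%Y-%m-%d %H:%M"))   # human-readable string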

Useful Python Libraries for Network Engineers

There are a number of libraries on the internet that can help you deserialize or parse
information from a wide variety of data formats into Python Data structure.
When you are searching for a specific library for parsing your chosen data format you might
notice there are many libraries available, it is a good practice to check the documentation and
validate which works best for your project.
Some commonly used libraries to work with data formats are shown in this table:

For YAML, there are 2 main libraries in which the main difference is that PyYAML only works for
YAML 1.1, whereas ruamel.yaml is useful if YAML 1.2 is used.

JSON is part of the Python standard library, therefore is included by default, you only need to
use the import statement under your code.

When it comes to XML, there are several libraries available and they all differ in some way, so it is worth taking a look at them. However, to work with XML files as dictionaries, you can always use xmltodict.
XML File manipulation with Python

As you saw, XML files can be handled with many different libraries:

● Untangle serves to convert XML files into Python elements represented as nodes and
attributes
● Minidom is more frequently used when you are familiar with the Document Object Model
(DOM)
● Xmltodict is for an experience similar to working with JSON or YAML files, which convert
the XML into Python Dictionaries
● Lastly, ElementTree represents XML data in a hierarchical format represented in a tree
structure, it is often described as a hybrid between a list and a dictionary

For this example, we will be working with xmltodict library.

It has functions to parse and unparse information (we will talk about Parsing and unparsing
later)
● xmltodict.parse(xml_data)
● xmltodict.unparse(dict)
Here is how such a script works (see the sketch below).

As with any library, first you need to import it (if you haven't installed it, that would be the first step); you can always take advantage of pretty print (pprint()).

We use variable1 = open("filename").read() to open the file and read the contents, then print the output with pprint.

As far as Python is concerned, all the contents of the XML file are treated as a single string; that is the reason we use variable2 = xmltodict.parse(variable1), which gives us access to the information of the XML file as a Python dictionary.
There will be occasions in which we need to convert a dictionary back to XML; to do so, use xmltodict.unparse(variable2).
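A minimal sketch, assuming an XML file named interfaces.xml exists in the current directory:

    import xmltodict
    from pprint import pprint

    xml_string = open("interfaces.xml").read()   # a single string as far as Python is concerned
    print(xml_string)

    data = xmltodict.parse(xml_string)           # now accessible as a dictionary
    pprint(data)

    xml_again = xmltodict.unparse(data)          # back to an XML string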

JSON File manipulation with Python

JSON and Python work together really well, as the json module is already part of the Python standard library; you only need to import it into your script.
JSON types have slightly different names in Python: objects become dictionaries, arrays become lists, strings become str, numbers become int or float, true/false become True/False, and null becomes None.

When working with JSON, you can parse (deserialize) and unparse (serialize).
To parse information (make the data available to Python), use json.loads() on a string, or json.load() on a file object; after we have finished working with the data, we can use json.dump() (or json.dumps() to get a string) to pass the data back into JSON form.
Look at the following example: open the JSON file with the open method, variable = open('filename').read(), then you can use print (or pretty print) to show the information within the JSON file. Remember that Python will treat all the contents as a single string, but once parsed we can work with the data as a dictionary, and when working with a dictionary you access the keys with square brackets "[ ]". Once you finish working with the JSON data, you can use json.dumps().
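A minimal sketch, assuming a file named devices.json exists and contains a top-level "devices" list (both are assumptions for the example):

    import json

    json_string = open("devices.json").read()    # a single string so far
    data = json.loads(json_string)               # parse (deserialize) into Python
    print(data["devices"][0]["name"])            # hypothetical keys, accessed with [ ]

    with open("devices_out.json", "w") as f:
        json.dump(data, f, indent=2)             # serialize back to JSON
    # json.dumps(data) would return the JSON as a string instead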

YAML File manipulation with Python


It is important to know how the strings from a YAML file are going to be translated into Python objects: mappings become dictionaries, sequences become lists, and scalars become strings, numbers, Booleans or None depending on their content.

The first thing to do would be to install the preferred YAML library (either ruamel.yaml or PyYAML); once installed, we need to import it.

Take a look at the following example

The next action would be to open the YAML file and load it into a Python object by defining a variable (just like in XML or JSON); you can again use the open method.
Just like with the other data formats, you can load the info from the YAML file so it is treated as a dictionary (to access keys in dictionaries, you use [ ]).

To convert the file into a Python dict, use variable = yaml.load(file).
To convert the dict back into YAML, use yaml.dump(variable).
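A minimal sketch using PyYAML, assuming a file named config.yaml exists with a top-level "user" mapping:

    import yaml

    with open("config.yaml") as f:
        data = yaml.safe_load(f)    # safe_load is the recommended loader

    print(data["user"]["name"])     # hypothetical keys, accessed with [ ]

    print(yaml.dump(data))          # convert the dict back to a YAML string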

Importing Spreadsheets and Data with CSV

Just like JSON, Python already includes a library to work with CSVs, which offers you the
possibility to treat CSV data as lists.

To start using it, simply import the Library import csv


Here is an example of using the library

The main difference is that information within the CSV file is accessed through rows and columns, which are indexed (starting from 0).

First open the file with the open method, then start treating the CSV data as a list using csv.reader(file_object).
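A minimal sketch, assuming a file named inventory.csv exists with a header row:

    import csv

    with open("inventory.csv") as f:
        rows = list(csv.reader(f))   # each row becomes a list of strings

    print(rows[0])       # header row
    print(rows[1][0])    # second row, first column (indexes start at 0)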

API Libraries

● REST APIs - requests
○ pip install requests
○ import requests
● NETCONF - ncclient
○ pip install ncclient
○ import ncclient
● Network CLI - netmiko
○ pip install netmiko
○ import netmiko
The requests library is used to make HTTP calls from Python; it is a full HTTP client that simplifies authentication, headers, and response tracking. It is great for REST API calls, or any HTTP request.
To use it, simply call requests.method(), for example requests.get().
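A minimal sketch against a hypothetical REST endpoint:

    import requests

    url = "https://api.example.com/devices"            # placeholder URL
    headers = {"Accept": "application/json"}

    response = requests.get(url, headers=headers, auth=("admin", "password"))
    print(response.status_code)
    print(response.json())                             # body parsed into a dictionary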

YANG Data Model

The ncclient library handles all NETCONF details including authentication, RPC, and operations; the data it exchanges is formatted as raw XML and described by YANG data models.

When CLI is the Only Option - netmiko

● Used when no other API is available


● Built on the paramiko library for SSH connectivity
● Support for a range of vendor network devices and operating systems
● Send and receive clear text
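A minimal netmiko sketch (the device details are placeholders):

    from netmiko import ConnectHandler

    device = {
        "device_type": "cisco_ios",
        "host": "10.0.0.1",
        "username": "admin",
        "password": "password",
    }

    connection = ConnectHandler(**device)
    output = connection.send_command("show ip interface brief")
    print(output)
    connection.disconnect()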

Serialization and Deserialization of Data


Serialization means converting a data structure or an object into a binary or textual format that can be stored and reconstructed later. Serialized files can be YAML, JSON, XML or any other textual data format, and the serialized file can then be transferred to any other system over the network.
The receiving system will be able to open that file and reconstruct it to the original state with all the objects defined inside (deserialization).
Like saving data to files, APIs also require reliable serialization and deserialization to exchange
data. Most programming languages, like Python, have existing tools for working with various
data formats.

Serialization happens when we are working with code in Python to configure an appliance and we want to send the data held in our objects to the appliance API in a format that will be unmistakably understood; to do so, we convert our Python objects into a data format (XML, YAML, JSON) that the appliance can read.

Deserialization happens when we are trying to get specific information from that appliance through the same API: we receive a data format (XML, JSON or YAML), and we use the deserialization process to make the data understandable by Python. This process is also called data or file parsing.

Debugging basics
When things go wrong, the Python interpreter provides very useful information about what happened - if you know how to read what it is telling you.

Let’s see an example of a Stack Trace

The stack trace starts with Traceback and ends with the generated error message, in this
example NotImplementedError
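As a minimal illustration, the short script below raises that error, and the interpreter prints the stack trace that follows (the file name is arbitrary):

    # example.py
    def get_interfaces():
        raise NotImplementedError("get_interfaces() is not written yet")

    get_interfaces()

    Traceback (most recent call last):
      File "example.py", line 5, in <module>
        get_interfaces()
      File "example.py", line 3, in get_interfaces
        raise NotImplementedError("get_interfaces() is not written yet")
    NotImplementedError: get_interfaces() is not written yet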

In order to read and understand what a stack trace is telling you:

1. Read the last line first, as it specifies what went wrong; sometimes it will be clear, other times it won't.
2. Review the call stack from top to bottom; this will show you where the error occurred. The details provided help you locate the calls and statements involved in the issue.
a. They tell you where each statement is found (file and line number)
b. They display the statement for your reference

Advanced IDE’s (Integrated Development Environments) have built-in debuggers that help you
interactively debug and explore your code by pausing the execution of your code and letting you
inspect the values of variables and incrementally step through the executing code; one
statement at a time.
The road to Model Driven Programmability
It is important to mention that networks are no longer isolated. REST APIs, web applications, DevOps use cases: each time there are more people and services looking to access the network, whether to do configuration, get the health of an application, or just the operational status of the network itself.

Of course, we have SNMP, which works reasonably well for device monitoring; it offers counters, operational data and statistics, however it is not often used beyond monitoring, for a number of reasons.

Model Driven Programmability, on the other hand, still presents device features, but it separates configuration and operational data, provides both vendor and standard information, and then uses YANG data models to articulate exactly what information we are going to work with.

The YANG data model describes the elements we will be talking about; transport protocols (NETCONF, RESTCONF, gRPC) are then used to access the data described by those YANG data models.

What is YANG

YANG is a modeling language with the following features:

● A module is a self-contained top-level hierarchy of nodes
● Containers are used to group related nodes
● Lists identify nodes that are stored in sequence
● Each individual attribute of a node is represented by a leaf
● Every leaf must have an associated type

Refer to the following example of YANG data:
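A small YANG sketch (the module and node names are invented) showing a module, a container, a list, and typed leafs:

    module example-interfaces {
      namespace "http://example.com/example-interfaces";
      prefix exif;

      container interfaces {
        list interface {
          key "name";
          leaf name {
            type string;
          }
          leaf enabled {
            type boolean;
          }
        }
      }
    }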

But what is a data model? It is simply a well understood and agreed method to describe something. Before YANG, there was no agreed method to describe aspects of the network, and that is what YANG was created for.

YANG can describe the following

Device Data models (Layer 2), such as:


➔ Interface
➔ VLAN
➔ Device ACL
➔ Tunnel
➔ OSPF
➔ Etc

And

Service Data Models (Layer 3), such as:


➔ L3 MPLS VPN
➔ MP-BGP
➔ VRF
➔ NETWORK ACL
➔ System Management
➔ Network Faults
➔ Etc
Working with YANG Data Models
There are Standard Definitions, which come from organizations like IETF, ITU, OpenConfig, etc,
which are compliant with the Standards, or there are some Vendor Definitions, from
organizations such as Cisco, which are unique to Vendor Platforms.

To get the modules, simply go to https://github.com/YangModels/yang, this repo contains all the
Data Models, both Standard and Vendor Definitions.

YANG can be displayed and represented in any number of formats depending on the needs
● YANG data format
● Clear Text
● JSON
● XML
● HTML/JavaScript

There is a library in Python called pyang, which helps to validate YANG data models or display them in different formats.

The following example displays the use of pyang:
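A sketch of rendering a model as a tree from the command line (assuming the ietf-interfaces.yang file has been downloaded; output shortened):

    pip install pyang
    pyang -f tree ietf-interfaces.yang

    module: ietf-interfaces
      +--rw interfaces
      |  +--rw interface* [name]
      |     +--rw name       string
      |     +--rw enabled?   boolean
      ...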

YANG data sent to or from a network device will be formatted in either XML or JSON depending on the protocol (NETCONF or RESTCONF).
When using NETCONF as the transport protocol, we will retrieve the data as XML, as NETCONF only works with XML.

Understanding NETCONF

The initial standard dates from 2006, updated in 2011. NETCONF does not explicitly define the content with which we are working; the interesting part is that we could use NETCONF with other data models aside from YANG.

● In the transport aspect, NETCONF inherits the security aspects of SSH
● Messages define how we actually perform the communication, using Remote Procedure Calls (RPCs) to retrieve data or take action
● Operations provide data, be it operational data or configuration data

To make a remote connection using SSH, we could use the CLI and issue ssh user@ip -p 830 -s netconf; port 830 is the standardized NETCONF port.
Once logged in successfully, the remote service will reply with a Hello message and its capabilities, and then our client will reply with a Hello message (both are formatted in XML).

The communication is between a client called the NETCONF manager and a remote server called the NETCONF agent; we send an RPC message and the agent replies with an RPC reply message.
In NETCONF, we have a series of operations we can perform
● <get> Retrieve running config and device state config
● <get-config> Retrieve all or part of specified configuration data store
● <edit-config> Loads all or part of a configuration to the specified configuration datastore
● <copy-config> Replace an entire configuration data store with another
● <delete-config> Delete a configuration data store
● <commit> Copy candidate data store to running data store
● <lock/unlock> Lock or unlock the entire configuration
● <close-session> Graceful termination of session
● <kill-session> Forceful termination of the session

NETCONF allows the use of several datastores, which are entire or partial configurations; we could use several datastores to stage elements. The minimum datastore that must be available is <running>; other datastores such as <startup>, <candidate> or <url> are optional.

Python has an available library for NETCONF, ncclient, which allows simple connection and communication, dealing with information in raw XML.

To open a connection using NETCONF, we can use manager.connect() (the manager is our client side, and the agent is the remote device to which we want to connect). We would need to include certain parameters, such as host, port, user and password, and we can use hostkey_verify=False to bypass validation of self-signed certificates.

In the following piece of code (see the sketch after this list), we can print all the capabilities. Usually, you will be presented with two different types of capabilities:
● Base NETCONF capabilities - what features are supported by the device
● Data Models Supported, which will include, Model URI, Module Name and Revision
Date, Protocol Features, Deviations (modification to the standard model that are
supported).
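The original code is not reproduced here, but a minimal sketch with ncclient could look like the following; the host, port and credentials are placeholders for your own device:

# Print the capabilities advertised in the NETCONF server hello
from ncclient import manager

with manager.connect(
    host="10.0.0.1",           # hypothetical device address
    port=830,                  # standard NETCONF-over-SSH port
    username="admin",
    password="admin",
    hostkey_verify=False,      # bypass validation of self-signed host keys
) as m:
    for capability in m.server_capabilities:
        print(capability)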
The following is an example of how we can automate the network with NETCONF (a sketch follows these steps):

1. We open an XML file where we keep the data that we want to request
2. We store all the data that needs to be passed to the agent (user, password, port, host and hostkey_verify)
3. We issue a get command
4. We store all the information in Python dictionaries; once stored, we can access it as we would access any dictionary
5. We can then print the information (we could use the format method to populate the data using the keys from the created dictionary)
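A rough sketch of those steps, using ncclient and xmltodict (the filter file name, host and credentials are placeholders, and the dictionary keys depend on the model your device returns):

import xmltodict
from ncclient import manager

# 1. Load an XML subtree filter describing the data we want
with open("interfaces_filter.xml") as f:
    netconf_filter = f.read()

# 2./3. Connect to the agent and issue a <get> with the filter
with manager.connect(host="10.0.0.1", port=830, username="admin",
                     password="admin", hostkey_verify=False) as m:
    reply = m.get(filter=("subtree", netconf_filter))

# 4. Convert the XML reply into nested Python dictionaries
data = xmltodict.parse(reply.xml)

# 5. Walk the dictionary and print the values you need
print(data["rpc-reply"]["data"])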
To modify the information of an interface, we could use the following example (a sketch follows the steps):

1. We open the file we would like to modify
2. We then modify the values that we want
3. We use the with statement to open a connection and gracefully close it once we finish using it; we need to add the usual data such as port, user, etc.
4. Then we send the information; there are two important aspects here: the edit_config method and the target datastore, in this case running
5. Then we can print the new data
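A hedged sketch of that edit-config workflow (the template file, host and credentials are placeholders, and the XML file is expected to contain a <config> root element wrapping data that matches a model supported by your device):

from ncclient import manager

# 1./2. Load the XML configuration template and adjust the values you need
with open("interface_config.xml") as f:
    netconf_config = f.read()

# 3./4. Open the session with "with" so it closes gracefully, then send the
#       payload to the running datastore using edit_config
with manager.connect(host="10.0.0.1", port=830, username="admin",
                     password="admin", hostkey_verify=False) as m:
    reply = m.edit_config(target="running", config=netconf_config)

# 5. Print the <rpc-reply>; an <ok/> element indicates success
print(reply)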

Learn to CRUD with GET, POST, PUT, PATCH and DELETE using
RESTCONF
RESTCONF is an HTTP-based protocol that provides a REST-API-like interface designed to access YANG-modeled data, providing JSON or XML data formats.

RESTCONF has 3 layers as part of its Protocol Stack & Transport


● Just like in NETCONF, RESTCONF is built above TCP, so it leverages its advantages
for communication
● Also, it has operations that can Create, Read, Update and Delete; these are carried by the RESTCONF verbs GET, POST, PUT, PATCH and DELETE
● At the end we will be presented with data in the format of XML or JSON

The operations in RESTCONF are fairly similar to the ones of NETCONF as you can see on the
table above.

To understand which data format you will be presented with, we can use HTTP headers:
● Content-Type: Specifies the type of data being sent from the client.
● Accept: Specifies the type of data the client requests in the response.
However, these headers will not be like the usual headers you might use in another API client,
you would need to use RESTCONF MIME Types that were defined for the Protocol
● application/yang-data+json
● application/yang-data+xml

The way to construct the RESTCONF URIs would be as following:


● Address - Of the RESTCONF agent
● ROOT - The main entry point for RESTCONF requests
● Data Store -The Data store being queried
● [YANG Module:]Container - Base model container being used, providing module name
is optional
● Leaf - An individual element from within the container
● [?<options>] - Optional parameters that impact returned responses
Here is an example of a RESTCONF URI
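A hypothetical example (device address and interface name are placeholders) could be https://10.0.0.1/restconf/data/ietf-interfaces:interfaces/interface=Loopback100?content=config, where restconf is the root, data is the datastore resource, ietf-interfaces is the YANG module, interfaces is the container, interface=Loopback100 identifies one list entry, and ?content=config is an optional query parameter.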

An example of configuring your appliance with RESTCONF would be as follows:

You can use the PUT method, then edit the body of your request to specify the new data that you want to add.
Then, check the status.
It is important to note that when using RESTCONF, you can use the same URI and change between calls (GET, POST, PUT, PATCH, DELETE).
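As a rough illustration, here is a minimal sketch using the Python requests library; the device address, credentials and the ietf-interfaces body are assumptions and must match a model supported by your device:

import requests

url = ("https://10.0.0.1/restconf/data/"
       "ietf-interfaces:interfaces/interface=Loopback100")
headers = {
    "Content-Type": "application/yang-data+json",
    "Accept": "application/yang-data+json",
}
body = {
    "ietf-interfaces:interface": {
        "name": "Loopback100",
        "type": "iana-if-type:softwareLoopback",
        "enabled": True,
    }
}

# PUT creates or replaces the resource identified by the URI
response = requests.put(url, headers=headers, json=body,
                        auth=("admin", "admin"), verify=False)
print(response.status_code)

# The same URI can be reused with GET to read back the configuration
print(requests.get(url, headers=headers,
                   auth=("admin", "admin"), verify=False).json())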
Nexus and its Native API

NX-OS refers to the Nexus operating system developed by Cisco.


To enable the NX-API, issue the command “feature nxapi”.

But what is NX-API? It comprises NX-API CLI and NX-API REST, which allow you to either use the CLI as on all regular appliances or a REST API for API calls. Under the NX-API, we have a Data Management Engine (DME) and a Management Information Tree (MIT).

Under NX-API CLI, we have JSON-RPC 2.0, which provides standardized remote procedure call support. It is not a REST API, but it can run over HTTP.
JSON-RPC 2.0 defines only the request and response format.

Aside from JSON-RPC, Cisco provides the INS-API, which is a CLI-based API that provides distinct show and config methods, supports Bash (Linux) commands, allows choosing between XML or JSON, and provides better error messages.

When we enable the NX-API, not only does the feature get enabled, but also a tool called the Developer Sandbox, which is a web page hosted on the box. You can browse to it using http(s)://<switch-ip/name>; on this page, you can construct API calls from CLI commands, allowing you to get Python script structures or templates.

When using the Developer Sandbox for JSON-RPC, we can issue two different types of commands, CLI or cli_ascii; the type determines the format of the response data:
● CLI:
○ Structured Data format
○ Simpler to write code against
● Cli_ascii:
○ Same output from cli command (just like the information you would get from the
device)
○ Only good if re-using screen scraping scripts
The INS-API allows the developer to determine which data format they would prefer (XML or JSON), but the data model will be the same.
The command types are pretty much the same as with JSON-RPC.
By using the INS-API, you can also send Bash commands, but in order for that to work, you need to enable “feature bash”.

Just like with a REST API, we can use Postman to leverage its capabilities (a Python sketch follows the list below); however, as NX-API CLI is not a REST API, there are some aspects that we need to consider first.
1. URL to Target - For all NX OS devices, it will be http(s)://<device-ip>/ins
2. Headers to Use - can be either of the following:
a. Content-Type: application/json-rpc
b. Content-Type: application/json
c. Content-Type: application/xml
3. Method - Can be any of the REST methods
4. Authentication, you can use Basic or Session Cookie
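Putting those four aspects together, a hedged Python sketch of an NX-API CLI call over JSON-RPC (the switch address and credentials are placeholders) could look like this:

import requests

url = "https://10.0.0.1/ins"              # NX-API entry point
headers = {"Content-Type": "application/json-rpc"}
payload = [
    {
        "jsonrpc": "2.0",
        "method": "cli",                  # "cli" returns structured data
        "params": {"cmd": "show version", "version": 1},
        "id": 1,
    }
]

# Basic authentication; verify=False only for lab self-signed certificates
response = requests.post(url, json=payload, headers=headers,
                         auth=("admin", "admin"), verify=False)
print(response.json())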

Unlike NX-API CLI, NX-API REST leaves the CLI behind, as it targets resources (called objects), which have their own properties and methods. NX-API REST is more powerful than NX-API CLI but has a steeper learning curve.

When leveraging NX-API REST, we will be working with the Data Management Engine (DME) and the Management Information Tree. The DME is always there, no matter which method you are using, as it provides a way to access the data within your device, providing API access to the Management Information Tree.

The Management Information Tree is how every NX-OS device is modeled. Everything in NX-OS is an object (the object class identifies its type), no matter whether it is a VLAN, an interface or a statistic. Every object has a parent/child relationship (every child has only one parent), and everything builds from the root (also known as “sys”).

Be aware that Objects have 2 different names


● Relative names (RN)
○ Identify object related to siblings
○ Unique within a parent Object
● Distinguished names (DN)
○ Unique identification within Management Information Tree
○ Series of Relative Names building to “sys”
For example refer to the following image

Smith would be the relative name, as it refers to the parent and its siblings.
The whole name (either Tom Smith, Jill Smith or Mr. Smith) would be the distinguished name, as it uniquely identifies the object in NX-OS.

The following will be a list of examples for RN and DN

To build a Distinguished Name, you need to refer to each of the previous Relative Names.

Visore is a feature similar to the Developer Sandbox; it is enabled along with NX-API and is used to navigate the object model while exposing NX-API REST calls. To access this hosted web page, simply browse to http(s)://<switch-ip/name>/visore.html

To search for a class, you can use Visore; for example, you could search for the VLAN class, which would display a list of all the VLANs that are configured.
You can also use Visore to search for a specific object (by using the distinguished name).

Just like in NX-API cli, you can leverage the capabilities of Postman.

As you are aware, URIs are everything for API calls, and here is a way to construct the URIs for
NX-API REST

We start as always with https or http, but then we have the following fields:

● Api - The main entry point for NX-API REST queries


● Query type
○ Class: Allows you to search for a whole class
○ Mo: The identifier to search for a specific managed object
● Identifier: Class name or distinguished name, will be related to the previous field
● Format: Identify XML or JSON as a type of content, will be used instead of HTTP
Headers
● ?<Query Params>: Optional Parameters that impact returned results

Of course, Visore can also construct the URI that you can query.
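As a hypothetical illustration (switch address and interface are placeholders): https://10.0.0.1/api/mo/sys/intf/phys-[eth1/1].json queries a specific managed object by its distinguished name, while https://10.0.0.1/api/class/l1PhysIf.json returns every object of the physical-interface class; the .json suffix selects the data format instead of an HTTP header.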
Cisco DNA Center APIs and Programmability
Cisco DNA Center is built on top of the intent-based infrastructure, which is where all wired and wireless devices are connected.

The platform capabilities of Cisco DNA Center allow integrating the solution not only with Cisco appliances but also with third-party products.
The platform capabilities of Cisco DNA Center allow North, South, East and West access to the toolset.


● Northbound capabilities include Business and Network Intent APIs, giving access to
applications and tools inside of Cisco DNA center
● Westbound, we can use adapters that can leverage IT and System tools
● Cisco DNA controller can also integrate with other Cisco Solutions using X-Domain
Integration
● Southbound we have the 3rd party SDKs, which let you bring third-party devices under Cisco DNA Center

Aside from the applications, Cisco DNA Center offers a set of tools which grant the administrator access to the data of DNA Center:

● Discovery: Automate additions of devices to controller inventory


● Inventory: Adds, updates or deletes devices that are managed by the controller
● Topology: Visualize how devices are interconnected
● Image Repository: Download and manage virtual and physical software images
● Command Runner: Allows running CLI commands on one or more devices
● License Manager: Visualize and manage license usage
● Template Editor: An interactive editor to author CLI templates
● Network Plug and Play: A simple and secure approach to provision networks with a near
zero touch experience
● Telemetry: Telemetry Design and Provision
● Data and Reports: Access Data Sets, Schedule Data Extracts for Download in multiple
formats.
Northbound REST API

The northbound REST API is accessible from the Platform application, which gives access to the API catalog.

There are a lot of APIs, so you can leverage the search function to find the most appropriate one. Once found, you can read each API's documentation, which provides request parameters and response codes based on the model. Of course, before using any API, you should authenticate using API authentication.

To authenticate, you will use an HTTP Basic Authentication POST request, which will retrieve a “token” that has to be sent in the “X-Auth-Token” header with subsequent requests.
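A rough sketch of that authentication flow with the requests library (the controller address and credentials are placeholders; the intent API endpoint shown is only one example):

import requests

dnac = "https://sandboxdnac.cisco.com"     # hypothetical controller address
auth_url = f"{dnac}/dna/system/api/v1/auth/token"

# HTTP Basic Authentication POST returns a token in the JSON body
token = requests.post(auth_url, auth=("devnetuser", "password"),
                      verify=False).json()["Token"]

# The token is then sent in the X-Auth-Token header with every other request
headers = {"X-Auth-Token": token}
devices = requests.get(f"{dnac}/dna/intent/api/v1/network-device",
                       headers=headers, verify=False).json()
print(devices)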

Network Control in the Cloud with Cisco Meraki

The Meraki cloud-managed IT portfolio is the foundation: Meraki offers wireless LAN, Ethernet switches, security appliances, device management, voice communications and even security features such as security cameras, all of this managed from the cloud. All of this can be configured and managed from the Meraki GUI; however, Meraki offers 3 APIs in its portfolio:

● Dashboard API: Configuration and operational information


● Scanning API: Track information of who is in the network or next to your network for
context sensitive information
● Captive portal API: Building onboarding resources
The Meraki Dashboard API is leveraged by service providers for managing and provisioning customers and customer devices, allowing the following:

● Provision customers
● Allocate customer’s devices
● Assign network templates
● Configure access for monitoring
● Add administrator accounts for NOC staff and/or end customer access

Enterprises use the Dashboard API for the following reasons:

● Automation of large deployment projects, e.g. branches


● Teleworker on and off-boarding
● Configure thousands of networks in minutes using templates
● Build your own custom dashboards with limited access

In order to enable the API in Meraki Dashboard, go to Organization - Settings - Enable API
Access, then, you need to create an API key, which can be done via My Profile - Generate API
Key.
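Once the key exists, a minimal sketch of a Dashboard API call with requests (the API key value is a placeholder generated from your own profile) could look like this:

import requests

base_url = "https://api.meraki.com/api/v1"
headers = {"X-Cisco-Meraki-API-Key": "your-api-key-here"}

# List the organizations that the API key has access to
organizations = requests.get(f"{base_url}/organizations",
                             headers=headers).json()
for org in organizations:
    print(org["id"], org["name"])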
Collaborative Software Development
Version control systems are tools that help manage changes of files over time, while enabling
efficient collaboration for those people contributing to a project.
Generally, software development is a collaborative process that requires:
● Using and contributing to software libraries from other developers
● Tracking bugs and feature requests from users
● Assisting operations and testing teams
● Multiple developers working as a team
Version control software keeps track of every modification to the code; if a mistake is made, developers can look back and compare earlier versions of the code to help fix it.
Version control can be useful not only for developers but for networking teams, for example to
keep track of all the configurations that are active on networking devices

A brief introduction to Git


Git is used to manage files on your computer, so that when you make changes, you see that
there is a new version of the file and can compare old and new. This is called Version control

Version Control with Git


Created by Linus Torvalds in 2005, Git is an example of a version control system. As the system is distributed, it provides the capability for every team member to interact with each other. While a central repository can be used, every member has their own copy of the project files. This copy is a complete copy of the full project; as changes made by one developer might be incompatible with changes made by another, Git helps to track every individual change by each contributor.

In Git, you have full control of your local repository, so you may decide to commit and push only
some parts of your work and leave some others on your local system.

If you ever had any of these questions, then you are in need of a version control system:

● How do I make incremental changes and share my work with others?


● How do I go back to the “X” version of this file?
● What changed between version “x” and “y” of a file?
● People have been making changes to the same file (or set of files), how do I reconcile
and merge all these changes?
Version Control Systems are powerful tools that help you share files, track changes and
manage changes and contributions from authors and contributors.

Git Architecture is composed of several different components:


● Repository: A vault for storing version-controlled files and data, on your endpoint, a
repository will look like a regular folder or directory, there is one important difference, the
repository will have a hidden .git/ subdirectory. In here Git stores committed version
controlled files and other repository data
● Remote Repository: Where the files of the project reside and from where all the other local copies are pulled; it can be stored on an internal private server or hosted on a public service such as GitHub, GitLab or BitBucket
● Local Repository: Is where snapshots or commits are stored on the local machine of
each individual
● Working Directory: Directory controlled by Git, differences between your working
directory and the local repository will be tracked by Git. You will have version-controlled
and un-versioned files and folders
● Versioned files: Files in your working directory that Git is tracking
● Un-Versioned files: Files in your working directory that you haven’t asked git to track
● Staging Area: Is where all the changes you want to perform are placed, you decide
which files Git should track
● Branches: Enable parallel work within a repository, they are created to split-off work
done by different people

When you commit changes to version-controlled files, Git stores full copies of all the changed files; it also stores a tree that contains links to all the changed files. Git computes a SHA-1 hash of all stored files and uses the commit hashes to uniquely refer to individual commits.
The architecture can be best defined as following:

Git Commands
One of the most important elements of Git is a repository, A repository is a directory that is
initialized with Git. A repository can contain anything such as codes, images and any other
types of files.

Git init is the first command you need to run, as it creates a new project to work with Git and will
allow Git to start tracking files. Make sure to initialize a working directory in the top directory that
has all the necessary files, otherwise, Git will only track a subset of the files where you ran the
command.

Git has to be configured with a username and an email address; this information is used in the Git history and during commits, allowing you to see who made the changes. When executing git init, Git creates a subdirectory called .git where all the snapshots and other metadata are contained.
To configure a Git username and email address, use git config; to see all available options that can be configured, use man git config.
Git status allows you to see the status of your project, it shows the files that need to be staged,
which branch are you on and if a commit is required. It will show as well the files that are not
being tracked by Git.
The git add command adds files to the Git project. It performs two primary functions:
● Starts tracking files
● Adds files to the staging area
Git rm removes a file from the Git project
Git commit commits the staged changes, this creates a point-in-time local snapshot
Git remote add allows you to add remote repositories, which is used to store your local
changes into the remote repository. To check which remote repositories are configured for a
specific project, issue the git remote command.
Git push is used to send your snapshots to a remote repository.
Git pull retrieves snapshots from other participants
Git fetch fetches changes from the remote repository and stores them on your local repository.
The main difference between Pull and Fetch is that Fetch will store the changes locally but pull
will merge the changes in your current branch.

The most common commands for Git are:

Git stash temporarily saves your current changes, while git stash pop retrieves the previously saved changes.
Git reset HEAD~1 removes the last commit from your local repository and moves the changes back to your working directory.
Git clone allows you to copy a pre-existing project.
Git revert allows you to revert any applied commit, not just the last one.
Git log --oneline shows abbreviated commit hashes and is used in conjunction with git revert.
We will work on creating a new Git repository.
We will work on creating a new Git Repository
You can create a new directory to start working with Git; remember that it will contain a hidden directory called .git

Now, you access the directory and can initialize the repository with git init; this will create the hidden .git directory

You can issue a git status command to see the current branch and if there are changes to be
committed

Then, you can start working on the project, let’s say you want to create a text file and edit it

If you issue another git status command, you will see Git tells you there is a change

Git also tells you that there are untracked files, and to use the add command. Let’s do that and
commit
If you issue git log, you will see a list of all the changes; since you have only made one, you should only see one

You can even see who made the change

Let’s make another change to the file, once done, issue git status to check if the change got
applied

If you use add and commit, you will see a new change in the logs
Now, if you have initiated a remote repository, you can push your changes to the remote repo.
To do so, use the command git push [remote repo] [local branch]

Go to github.com and see your changes

Git Workflow
A workflow usually consists of the following actions
● Directory initialization for Git
● Adding a file
● Committing a file to a local repo
● Pushing a file to a remote repo
● Updating a local repo by pulling changes from a remote repo

Here is a brief example of a workflow
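As a minimal illustration of those actions (the remote URL and file name are placeholders):
1. git init
2. git add README.md
3. git commit -m "Initial commit"
4. git remote add origin https://github.com/<user>/<repo>.git
5. git push origin master
6. git pull origin master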


If you are planning to work with someone else’s code, you first have to get it onto your machine. To do so, use the git clone command (git clone https://github.com/CiscoDevNet/dne-dna-code); by doing this, your endpoint connects to the URL provided and clones a full copy of the remote repository to your machine, from the first commit to the last.

Git status helps you verify which branch you are currently working on and the current status of your working tree.
Now you have a workflow
1. Download (clone) a local copy of a remote repository to your workstation
2. Create a “Safe place” (branch) to make edits
3. Save (commit) your incremental changes

Branching with Git


Git allows you to create different instances of your repository called branches; by default there is only one, called master. For example, in your automation code you have a master branch which corresponds to the working code; the master branch should only contain code that is stable.

The following workflow should be used in Git to apply fixes to your code and develop new features:
● Never develop or apply fixes to the master branch
● Always create separate branches for a new feature or bug fix
● Apply frequent merges of the master branch to your feature or bug fix branch to avoid
merge conflicts
● When a fix is ready merge it back to the master branch
The following is a graphic representation of the workflow

There is a master branch (production), but an urgent fix is required, so we create a branch called bugfix, which is then merged back to production; the same happens with the OS upgrade branch, which will also include a merge from the bugfix branch.
Code applied in different branches does not affect the master branch until you merge the changes.

The following are the most commonly used commands:


● Git branch allows you to manipulate branches; using the command without additional arguments lists all the branches in your local repository. You can create a new branch with git branch branch_name, or delete a branch with the -D option.
● Git checkout allows you to navigate between branches; simply issue the branch name after the command. Git checkout followed by -b creates a branch and automatically switches to it
● Git push sends your snapshots to the remote repository
● Git status helps you verify which changes were applied
● Git merge merges from the fix branch to the master branch
● Git push origin master pushes the code to the remote repository
If you started editing files on the master branch and committing your changes, your local repository would be out of sync with the remote server. If someone else commits some changes to the master branch and pushes their changes to the server before you push yours, you will have to merge (reconcile) their changes with your own before you can push your changes to the server.

Git checkout -b branch_name can be issued to create a new branch and navigate to it.

When you create a branch on your local repository, you create a safe place for you to make
changes to the repo. Any changes you make and commit are committed locally to this new
branch.
Git fetch helps you update your local repo with the updates from the remote repository; note that if you have recently cloned your repo, there might not be any updates for you to retrieve

It is time to make some changes to the files on your local repo, navigate to one of the test files
and make a change. Once finished, use the commands:
Git status (git will tell you that file was changed), note Git also tells you that the change
occurred on your branch

Git diff allows you to see what changed in the file; the change is highlighted with a + sign
There will be occasions where you need to revert a file to the “last known good state”; you can check out the file to overwrite the changes and restore the file to its last committed version
You can also reset your working directory to the last commit. Note that you will lose all changes
you have made since your last commit

If you want to delete the working branch entirely, use git branch --delete --force <branch-name>

Pull requests can be used with branches or forks.


You can issue the command git push [origin] [local-branch]
Handling Merge Conflicts
Merge conflicts can arise in the following scenarios:
● Two developers changed the same line or lines in the same file
● One developer deleted the file while another changed it
● One developer moved the file to another directory, while another developer changed it
● Both developers moved the file to different directories
Here is a common example of a Merge conflict message

When facing merge conflicts that Git is not able to resolve on its own, it is up to you to fix them manually.
Fortunately, Git allows you to view who made the changes; however, you first need to tell Git who you are.
Use the following commands to configure your user name and email
Git config --global user.name “Your Name”
Git config --global user.email “email-address”

Committing is a two-step process


Git add: Stage the files to be added
Git commit: Commit the files to the repository with a commit message. You can issue -m to add a short commit message; without it, Git will open the default text editor (vim)

Git diff Command


You can use the git diff command to view the differences of files in your working directory (meaning you can compare the same files in your staging area or in your local repository); you can also use git diff to show differences between two commits, branches or tags.
Git diff allows you to see changes between your staging area and the working directory; a + sign refers to the changes you have made, while a - sign refers to the previous state.
The following is an example

The -AAA configuration line would be the actual state and the +New AAA configuration line
would be the changes you have performed

Git diff also has some other useful forms.

Git diff --cached shows changes between the staging area and the local repository
Git diff HEAD shows changes between the working directory and the local repository
Git log shows commit hashes
Code Review
Code review is a phase in software development that helps to identify bugs and poor code practices, as well as improve the design and overall readability of the code.

Reasons for code Review:


● Identify bugs
● Improve code quality
● Get familiar with different parts of the project
● Learn something new

Source code might be reviewed by software engineers or developers, testers, or even people
who are experts in the area that the source code covers. In a typical environment, multiple
people are assigned as code reviewers. In addition to checking the syntax of the code, their
tasks are to verify that the code works and sufficient tests exist, and to improve the code if
possible.

Multiple tools exist for code review, including:


● Mailing patches, like the Linux kernel repos
● Web applications, like GitHub (pull request), GitLab (merge request) and Gerrit (changeset)

A typical feature development or bug fixing workflow is:


● Create a new branch
● Commit code changes to the remote repository
● Create a pull request
● Perform automated tests
● Review code
● Accept code (merge)
● Perform additional automated tests

In a typical software development environment, pull request creation automatically triggers additional checks that are made before the reviewers begin reviewing code; these tests or checks include:
● Front-end code tests
● Back-end code tests
● Database migration tests
● Code style test
● Building a Docker image test
● Other tests

Running code tests is the most common step after creating new pull requests. If some test fails,
the pull request cannot be merged, so the developer needs to ensure that all tests pass.
Multiple tools such as Circle CI and Jenkins can be integrated with GitHub for automated test
execution.

Once all tests have passed, code can be merged into the destination branch (often the master branch).

There are different ways of merging your branch to the destination branch
● Direct Merge: Using git merge command, which might not be a recommended option
● Merge using pull requests: Requires code review, which is the recommended option

When changes are pushed to a custom created branch on GitHub, GitHub automatically offers
you the ability to create a pull request. Github automatically tells you to create a pull request
when a new branch is pushed to a remote repository with a notification and a Compare & Pull
request button

If you click the button, it will redirect you to the appropriate page for creating a pull request in
GitHub

On the “Open a Pull Request” page, you have a lot of options; some are mandatory, some are not.
● Source and destination branch
● Comment: It is recommended to write a comment to notify reviewers about the change
● Reviewers: One or more people who should review the changes
● Assignees: One or more people who should review or make the changes and perform the merge.
Here is an example of a pull request in GitHub

On the same page, you will find information about commits you made in the source branch; all
code changes will be presented as well.
When a Pull request is created, GitHub will notify Assignees and Reviewers by email, there are
other tools that can be integrated to GitHub so they can be notified.

In a code review, reviewers and assignees will not only check the code visually, but they can
download it and test it locally.

When a certain reviewer or assignee finishes with the code review, there are multiple options to
submit a review:
● Comment: Submit general feedback but without approval or denial
● Approve: Approve changes done; the pull request can be merged
● Request changes: Deny changes
○ If any assignee or reviewer Request a change, you need to modify the code
before submitting another review

After a pull request is approved, code changes can be merged into the destination branch
Here is an example of a Review accepted, the pull request has been accepted and branches
have been merged.

Merging a pull request can be done in multiple ways and can be enabled or disabled for each
repository:
● Create a merge commit (default option): All commits from the source branch are
added to the destination branch in a merge commit
● Squash and merge: All commits from the source branch are squashed into a single
commit and merged into the destination branch
● Rebase and merge: All commits from the source branch are added individually to the
destination branch without a merge commit
Software Development Methodologies

The new paradigm that is being used when developing is called Agile Methodology and Lean
Process.
Software is transforming industries of all types, including transportation, media, retail and
hospitality.
Most engineers can open a text editor, write code and run a script based upon that code; if a group of developers uses this approach, it could lead to errors, as it has no structure or process, and it would likely not meet user expectations. A more structured approach is needed.

The Software Development Lifecycle (SDLC) process is used by software creators to design, develop and test high-quality software products and applications; additionally, SDLC aims to meet and exceed customer expectations on budget and time.

There are several methodologies to implement SDLC.

● Prototyping: Designed to build an initial version of the product quickly in order to


understand better requirements and scope.
● Rapid app development: Puts less emphasis on process and less emphasis on planning,
all components are developed in parallel.
● Extreme programming: An extension of Agile (which will be described more in depth
later) with unit testing, code reviews, simplicity and customer communication taken to
the extreme.
Other SDLC methods are:

Waterfall
Is based on a linear sequence process, has been around since the 1950's and is rooted in the
manufacturing and construction industries.

➢ The waterfall model has a very structured approach, this method assumes that all
requirements can be gathered up front during the requirements phase. Once this stage
is complete, the process runs “downhill”.
➢ During the design phase, all the information obtained that is in the requirements and
analysis stage is used to build a new high-level design.
➢ Once the Design phase is completed, the systems analyst begins transforming the
design based on hardware and software specifications.
➢ Following the Development phase (coding) comes the Testing phase, after which customers can use the product and it is maintained based upon customer feedback.

Waterfall works well for small, well defined projects, as all requirements are well known before
beginning. As it is a linear process, you can’t advance to the next step without finishing and
testing the current step.

Advantages of Waterfall:
● Design errors are highlighted before any code is written, saving time during
implementation phases.
● Good documentation is mandatory, the effort is useful for engineers in the later stage of
the process.
● It is easy to measure progress and set milestones
Disadvantages of Waterfall
● In the early stages it is hard to gather all possible requirements.
● It becomes very difficult and expensive to re-engineer the application.
● The product will only be shipped at the end of the chain, leaving no space for middle
testing.
Lean
The Lean philosophy provides the most efficient way possible to eliminate everything that is
useless. If you don’t need it, get rid of it. It is meant to minimize constraints (resources) while at
the same time producing consistent flow and profitability.

Lean methodology consists of 3 major points

Purpose: Which problems will the project solve? Why does it solve the problem? These questions are often referred to as the Five Whys.
Process: How will the organization assess each major value stream to make sure that each step
is Valuable, Capable, Available, Adequate and Flexible?
People: The right people need to take responsibility and produce outcomes
Agile

Agile is a way to implement the Lean philosophy in the software development industry. It is primarily


based on the concept of short sprints, seeking to do as much as possible in a relatively short
time, and without losing a focus on value. Agile software development includes customer in the
software lifecycle by delivering software in very early stages to gain valuable feedback.

Scrum is an Agile project management methodology, although it can be considered more as a


framework for managing processes. Scrum is designed with the idea that not all requirements are fully understood at the beginning and may not be listed in the early stages of the development process. Scrum places value on iterative and incremental software development.

The methodology lists all the requirements on a Product Backlog. These requirements are built
around user stories, or small features of the final application.
Scrum also recommends a daily scrum in which developers address what they did the day
before and what they are going to do today. At the end of a sprint, the team provides a
shippable product increment that can be delivered to the customer.
Agile has the same phases as Waterfall lifecycle but given the concept of sprints, all phases of
the life cycle are completed within each single sprint.
Although it is a more advanced way to work compared with older methodologies, it still has pros
and cons.

Advantages:
● Rapid and continuous delivery of software releases helps customers to better
understand what the final product will look like
● Thanks to Scrum, people's interaction is emphasized.
● Late project changes are more welcome.

Disadvantages:
● Lacks emphasis on good documentation
● Without complete requirement-gathering process, customer expectations can be unclear
at beginning
● Rapid development increases the chance for major design changes

Test Driven Development

Smaller parts of code are easier to test, and testing smaller parts of code can cover more edge
cases to detect and prevent bugs. Test Driven development (TDD) is a software development
methodology where you write the test code before the actual production code.
TDD ensures that use cases are tested, and source code has automated tests by using the test
first approach.

Development is done in iterations (a small sketch follows this list), where you do the following:


● Write tests
● Run the tests; they must fail
● Write source code
● Run all tests; they must pass
● Refactor the code where necessary
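A small illustrative sketch of one iteration in Python (the function name and VLAN range check are invented for the example; the test is written first, fails while the function does not exist, and passes once the implementation is added):

def test_is_valid_vlan_id():
    assert is_valid_vlan_id(100) is True
    assert is_valid_vlan_id(0) is False       # edge case: below valid range
    assert is_valid_vlan_id(4095) is False    # edge case: reserved VLAN

# Implementation written after the test, kept as small as possible
def is_valid_vlan_id(vlan_id):
    return 1 <= vlan_id <= 4094

if __name__ == "__main__":
    test_is_valid_vlan_id()
    print("All tests passed")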

At the beginning of software development, the team defines tasks for what you want to achieve in the current iteration. By working on a specific task, you should have very accurate information about what you need to achieve:

● Which kind of functionality do you need to develop?


● What are the input parameters for the functionality?
● What is the expected result of the functionality?

Having the information, you should be able to create tests for expected and unexpected input
parameters. Tests that cover all possible input parameters, especially edge cases are crucial in
software development because most bugs arise from scenarios that were not covered during
the dev phase.
Writing these tests first has immediate benefits for development:

● Gives you a clear goal


● Shows specification omissions and ambiguities before writing code, avoiding potentially
costly rewrites
● Uncovers edge cases for you to address from the start
● Makes debugging easier and faster, because you can simply run the tests

The next step in the TDD approach is to develop code that implements required functionality,
here you will write code and run tests until all tests have passed. Do not write more code than
what is needed to achieve your objective.
When all tests pass, you can be confident that you have implemented the new functionality and
did not break any other parts of the system.
Now it is time to refactor the code; perhaps some code that you have written is not optimal and you need to modify it.

Refactoring the code before moving to the next task has multiple advantages:
● Better code structure
● Better code readability
● Better design
The TDD iteration is finished when code refactoring is done and all tests pass; then you can proceed to the next iteration.
Modular Software Design

When you develop software that delivers various services to end users, it is important that the
code is structured in a way that is readable, easy to maintain and reliable. When you do not
follow some good practices when growing the codebase for your services with new features,
you may come to a point where adding a new feature is a daunting task for any developer,
including your future self.

You should strive to develop a software system that has a clear communication path between
different components. Such systems are easier to reason about and are more sustainable.
Cloud-based applications that are meant to be flexible and scalable, supporting complexities
that networks introduce, will use a different approach to software design than for example, an
enterprise application that can be more predictable, with a simpler deployment model.

Cloud-based and other applications are often split into a suite of multiple smaller, independently running components or services, all complementing each other to reach a common goal. Often, these components communicate with lightweight mechanisms such as a Representational State Transfer (REST) Application Programming Interface (API).

This type of architecture is widely known as microservices. With microservices, each of the
components can be managed separately. This means that change cycles are not tightly coupled
together, which enables developers to introduce changes and deliver each component
individually, without having to rebuild and redeploy the entire system. Microservices also enable
independent scaling of parts of the system that require more resources, instead of scaling the
entire application.

The following image shows an example of microservice architecture with multiple isolated
services.

Although microservices architecture benefits are obvious, not all applications will be using said
architecture. Sometimes, an application that runs a single logical executable unit is simpler to
develop and will suffice for a particular use case. Such application development architectures
are referred to as monoliths.
Because a monolithic application is typically developed as a single executable component, it also needs to be scaled as a single component. Even when this component contains multiple smaller pieces that represent the entire logic, you cannot scale the critical parts as you would in a microservices architecture.

Different techniques and good practices emerged to cope with such problems and to make the
code more elegant and efficient in a monolithic design.

Code designs that will be discussed further can be used in monolithic and microservices
architecture for maintaining a clean, well-defined, and modular codebase.

Functions

With functions, you can make order in your code by dividing it into blocks of reusable chunks
that are used to perform a single, related task. Many programming languages come with built-in
functions that are always available to programmers.

Functions are defined with special keywords and can take parameters or arguments that can be passed when invoking a function. Arguments are used inside the execution block for parts that need some input for further processing. They are optional and can be skipped in the definition of a function. Inside functions, it is possible to invoke other functions that might complement the task that your function is trying to accomplish.

To stop the function execution, a return statement can be used. Return will exit the function and
make sure that the program execution continues from the function caller onward. The return
statement can contain a value that is returned to the caller and can be used for further
processing.
Look at the following example
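The original listing is not reproduced here; a possible reconstruction of the function described below (the exact implementation in the source material may differ) is:

def is_ipv4_address(address):
    # Split the dotted-decimal string and validate each of the four octets
    octets = address.split(".")
    if len(octets) != 4:
        return False
    for octet in octets:
        if not octet.isdigit() or not 0 <= int(octet) <= 255:
            return False
    return True

print(is_ipv4_address("192.168.1.1"))   # True
print(is_ipv4_address("300.1.1.1"))     # False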

The function defined is is_ipv4_address


The return value will give a Boolean

Look at the following example
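A hedged reconstruction of the scope example described below (the variable and function names follow the text, the body is assumed):

datacenter = "dc1"            # defined outside the function (outer scope)

def generate_device_name(role, number):
    # This local variable exists only inside the function
    device_name = f"{datacenter}-{role}-{number}"
    return device_name

print(generate_device_name("leaf", 1))   # dc1-leaf-1
print(datacenter)                         # still visible outside the function
# print(device_name) here would raise NameError: the local name is not visible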

You can see that the variable datacenter is used inside and outside the function generate_device_name; variables defined inside a function are not visible from the outside, which means the inner and outer variables have different scopes.

You can avoid repetitive code by capturing code intent into functions and making calls to them
when their actions are needed. You can call the same functions multiple times with different
parameters.
Modules

With modules, the source code for a certain feature should be separated in some way from the
rest of the application code and then used together with the rest of the code in the run time.

Modules are about encapsulating functionality and constraining how different parts of your
application interact.
An application should be broken into modules, small enough that a developer can reason about
module function.

In the following example, two Python modules are imported into a single module, which then references the functions from the two separate modules.

Modules usually contain functions, classes, global variables, and different statements that can
be used to initialize a module. They should ideally be developed with no or few dependencies
on other modules, but most of the time, they are not completely independent. Modules will
invoke functions from other modules, and design decisions in one module must sometimes be
known to other modules.

More or less all modern popular languages formally support the module concept. The syntax of
the language differs.
As an example, you can create a module with the two functions that were created before, save them as Toolbox.py and then import the module into other code.

Import the module using the filename (without the extension)
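A minimal sketch of that layout, reusing the two functions from the earlier examples (the file name Toolbox.py follows the text; the call parameters are assumptions):

# Toolbox.py would contain is_ipv4_address() and generate_device_name();
# another script in the same directory can then import and reuse them
import Toolbox

print(Toolbox.is_ipv4_address("10.10.10.10"))
print(Toolbox.generate_device_name("spine", 2))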

The major advantage of using modules in software development is that it allows one module to
be developed with little knowledge of the implementation in another module.
Modules bring flexibility because one module can be changed entirely without affecting others.
Essentially, the design of the whole system can be better understood because of modular
structure.
Classes and Methods
A class is a construct used in object-oriented programming languages. Objects are basically records or instances that allow you to carry data with them and execute defined actions on them.

Class is a formal description of an object that you want to create. It will contain parameters for
holding data and methods that will enable interaction with the object and execution of defined
actions.

Often classes reside in their own files

The method _ _ init _ _ is a special initialization method that is, if defined, called on every object
creation. This method is generally known as a constructor and is usually used for initialization of
object data when creating a new object.
The variable self represents the instance of the object itself and is used for accessing the data
and methods of an object. In other languages, the Keyword “this” is used instead.
Besides the _ _ init _ _ method, there are many more built-in methods that you can define inside
your class. In Python, these are known as magic methods. With them, you can control how
objects behave when you interact with them.
● _ _ str _ _: Controls how an object is displayed when you print it
● _ _ lt _ _, _ _ gt _ _, _ _ eq _ _: you can write custom sorting procedures
● _ _ add _ _: specifies how the addition of two objects behaves when using the “+” operator

Objects usually carry some data with them, so add an option to give the device object a
hostname and a message of the day

The class Device now accepts one parameter which is hostname. The variable motd can be
changed after the object is created. After the object initialization, hostname can also be
changed.
Methods in Python are very similar to functions. The difference is that methods are part of
classes and they need to define the self parameter. This parameter will be used inside methods
to access all data of the current object instance and other methods if necessary.

Add a show() method to the device class, which will enable you to print the current
configuration of a device object

As before, on the class Device, we gave the option to add a hostname, the variable motd can
be modified later.
When printing the configuration using the show() method, you get the following output

Here, we are initializing the class Device 2 times, giving each object a different name and giving
the variable motd of the object dev2 the argument “Welcome!”
As we have not given the object an “interface” attribute, when we try to display it Python reports that there is no such attribute.
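Since the original listings are not reproduced here, the following is a hedged reconstruction of the Device class discussed above (attribute names follow the text, the output format of show() is assumed):

class Device:
    def __init__(self, hostname):
        self.hostname = hostname      # set at object creation
        self.motd = ""                # can be changed after creation

    def show(self):
        # Print the current "configuration" of the object
        print(f"hostname {self.hostname}")
        print(f"banner motd {self.motd}")

dev1 = Device("sw-access-1")
dev2 = Device("sw-access-2")
dev2.motd = "Welcome!"

dev1.show()
dev2.show()
# Accessing an attribute that was never defined raises an error, e.g.
# print(dev1.interface) -> AttributeError: 'Device' object has no attribute 'interface'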

Class Inheritance

Generally, all OOP languages support inheritance, a mechanism for deriving new classes from
existing classes.
Inheritance allows you to create new child classes while inheriting all the parameters of the so-
called parent class.
Extending the previous example with a Router class, which will inherit all the values from the
device class.

As shown above, the keyword pass skips any custom implementation; it allows you to leave the implementation empty
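Continuing the sketch above, a minimal Router class could look like this:

class Router(Device):
    pass                      # inherits everything from Device, adds nothing yet

r1 = Router("edge-rtr-1")
r1.motd = "Authorized access only"
r1.show()                     # method inherited from Device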
Modular Design Benefits
Too many interactions between different modules can lead to confusion and unwanted side
effects when something needs to be changed in a module. Responsibility of a module needs to
be transparent so that you can reasonably know what its function is.

A change in one part of the application code should not affect or break other parts of the
system. To enable independent evolvement of modules, they need to have well-defined
interfaces that do not change.

The implementation behind an interface can of course be changed without affecting the modules that depend on it.

Here are some design guidelines to consider:


● Acyclic dependencies principle
● Stable dependencies principle
● Single responsibility principle

The acyclic dependencies principle ensures that when you split your monolithic application into multiple modules, these modules (and the classes accompanying them) have dependencies in one direction only. If there are cyclic dependencies, where modules or classes depend on each other in both directions, changes in module A can lead to changes in module B, but changes in module B could lead to unexpected behavior in module A. In large and complex systems, these kinds of cyclic dependencies are harder to detect and often lead to code bugs. It also becomes impossible to reuse or test modules separately.

If this kind of dependency occurs, there are strategies to break the cyclic dependency chain.
High level modules that consist of complex logic should be reusable and not be affected by the
changes in the low level modules that provide you with application specifics.
There are strategies like Dependency Inversion, which can be defined as:
● High level modules should not depend on low level modules. Both should depend on
abstractions.
● Abstractions should not depend on details. Details should depend on Abstractions
There is a difference in how to implement this approach between statically typed languages like
C# or Java and dynamically typed languages like Python or Ruby.
Statically typed languages typically support the definition of an interface.

An interface is an abstraction that defines the skeleton code that needs to be extended in your
other custom classes.
Developing against predefined interface abstractions promotes reusability and provides a stable
bond with other modules, you also benefit from easier changes of the implementation and more
flexible testability of your code.

In dynamically typed languages, no explicit interface is defined; here you would use duck typing, which means the appropriateness of an object is not determined by its type but rather by the presence of properties and methods. Interfaces are defined implicitly by adding new methods and properties to the modules or classes.

We are about to see an example of a cyclic dependency


● Where the app module uses the database module for setting up the database
● The database module uses the init module for initializing database data.
● In return, the init module calls the app module runTest() method that checks if the app
can run.

In theory, you need to decide in which direction you want the dependency to progress.

The Stable Dependencies Principle states that frequently changing, unstable modules can depend on modules that do not change frequently and are as such more stable, but the dependency should not go in the other direction.
In the app.py module we will use the class App and define three methods:
1. _ _ init _ _ (self)
2. startProgram(self)
3. runTest(self)
We will also import the module db

The database module (db.py) is as follows


● From module init we are importing class initialization
● We are defining 2 functions
○ _ _ init _ _ (self)
○ setupDB(self)
At last we have the init module

● App class will be imported


● Here we are adding data to init in the form of a Python List.
● The Class Initialization will have the following functions
○ _ _ init _ _ (self), which will be calling App module
○ loadData (self), which will be using the function runTest against the DB class
and will provide an output depending on the outcome.

It is possible to break the cyclic dependency between these three modules by extracting another
module, in this case named Validator

Since the runTest method is now in the Validator module, the App class no longer implements it, so the init module does not reference the App module anymore.
With this, the cyclic dependency is broken by splitting the logic into separate modules with a more stable interface that other modules can rely on.
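A compressed sketch of the refactored design (the three modules are shown as classes in one listing for brevity; names follow the text, the bodies are assumed):

class Validator:
    def runTest(self):
        # The check that used to live in App now has its own stable home
        print("running sanity test")
        return True

class App:
    def __init__(self, db):
        self.db = db

    def startProgram(self):
        self.db.setupDB()

class DB:
    def __init__(self, init):
        self.init = init

    def setupDB(self):
        self.init.loadData()

class Init:
    def __init__(self, validator):
        self.validator = validator
        self.data = ["vlan 10", "vlan 20"]

    def loadData(self):
        # init depends on Validator instead of calling back into App,
        # so the dependency now flows in one direction only
        if self.validator.runTest():
            print("loading", self.data)

App(DB(Init(Validator()))).startProgram()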

Python supports explicit abstractions using the Abstract Base Class (ABC) module, which
allows you to develop abstraction methods that are closer to the statically typed languages.

Modules with stable interfaces are also more plausible candidates for moving to a separate
code repository, so they can be reused in other applications.

When you develop software modules, group the things that change for the same reasons, and
separate those that change for different reasons.
Modules, classes and functions are tools that should reduce complexity of your application and
increase reusability. Sometimes, there is a thin line between modular, readable code and code
that is getting too complex.
Loose Coupling

Loose coupling means reducing the dependencies of a module, class or function on the other modules, classes or functions it uses directly. Loosely coupled systems tend to be easier to maintain and more reusable. The opposite of loose coupling is tight coupling, where all the objects mentioned are more dependent on one another.

Reducing the dependencies between components of a system results in reducing the risk that
changes of one component will require you to change any other component.

In a Loosely Coupled system, the code that handles user interactions will not be dependent on
code that handles remote API calls. Your code will benefit from designing self-contained
components that have a well-defined purpose.

Coupling criteria can be defined by three parameters:


1. Size: The number of relations between modules, classes and functions
2. Visibility: Your solution should be obvious to other developers
3. Flexibility: It should be straightforward to change the interface from one module to the
other
Examine the following code

Here, there is just one dependency (the Device class calls the Interface class). The function takes one argument and there is no data hiding or global data modification. But what if you have another class called “Routes” that also wants to add addresses to the database, but does not use the same concept of interfaces?
The addressDB module expects to get an interface object from which it can read the address.
The add() function is preventing the use of the same function for the new Routes class.
Instead of using def add(interface), try using def add(address)

The add() function now expects an address string that can be stored directly without traversing the object first. The function is not tied to the interface object anymore.
However, it is the responsibility of the caller to send the appropriate value to the function (see the sketch below).
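A hedged sketch of that refactoring (class and function names follow the text, the implementations are assumed):

class AddressDB:
    def __init__(self):
        self.addresses = []

    # Before: tightly coupled, the function had to know the Interface object
    # def add(self, interface):
    #     self.addresses.append(interface.address)

    # After: loosely coupled, any caller can pass a plain address string
    def add(self, address):
        self.addresses.append(address)

db = AddressDB()
db.add("10.1.1.1/24")          # called from Interface-related code
db.add("192.168.0.0/16")       # called from the new Routes class as well
print(db.addresses)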

The easier a module or function can call another one, the less tightly coupled it is.
Cohesion

Cohesion looks at classes, modules, and functions and defines whether all of their contents aim for the same goal. The purpose of a class or module should be focused on one thing and not too broad in its actions. Modules that contain strongly related classes and functions can be considered to have strong or high cohesion.

The goal is to make cohesion as strong as possible, because logically separated code blocks
will have a clearly defined purpose, making it easier for developers to remember the
functionality and intent of the code.
Architecture and Design Patterns

The concepts that defined what Object Oriented Programming enables are:
● Abstraction: Hide the logic implementation behind an interface. When defining an
abstraction, your objective is to expose a way to access the data without exposing
details of the implementation.
● Encapsulation: Conceals the internal state and the implementation of an object from
other objects. Can be used to restrict what can be accessed on an object.
● Inheritance: The capability of a class to inherit the definitions of a parent class
● Polymorphism: When a variable or method accepts more than one type of value or parameter

Statically typed languages use abstract classes and interfaces to provide explicit means of defining abstraction. In dynamically typed languages there are no special language constructs; in Python, there is a module (abc) which brings you one step closer to the discipline of statically typed languages and their definitions of abstract classes and interfaces.

Here is an example of how ABC can be used.
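Since the original listing is not shown here, a small illustration with Python's abc module (the class names are invented for the example):

from abc import ABC, abstractmethod

class NetworkDevice(ABC):
    @abstractmethod
    def backup_config(self):
        """Every concrete device type must implement this method."""

class Switch(NetworkDevice):
    def backup_config(self):
        return "copying switch configuration"

# NetworkDevice() would raise TypeError: the abstract method keeps the
# abstraction from being instantiated directly
print(Switch().backup_config())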

In OOP, different objects interact with each other at runtime. One object can access the
data and methods of another object, regardless of whether the objects are of the same type. But
often you will want some of the data and methods to stay private to the object, so that the
object can use them internally but other objects cannot access them.
In Python, encapsulation in the sense of hiding data from others is not explicitly enforced and is
better understood as a convention: you prefix a name with an underscore
(or two) to mark it as nonpublic data.

When using a double underscore, name mangling occurs (the name is encoded into a unique form
so that common names do not clash). If you have a method __auditLog
in a device class, the name of the method becomes _device__auditLog.
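For example, a minimal sketch of this behavior:

class device:
    def __auditLog(self):
        print("writing audit log entry")

    def save(self):
        self.__auditLog()        # resolved internally to _device__auditLog

d = device()
d.save()
d._device__auditLog()            # still reachable under the mangled name
# d.__auditLog()                 # would raise AttributeError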

It is important to understand how to define class interfaces, hierarchies, and
relationships between different modules and classes, to decide which programming language and
database to use, how to use them, and how all the pieces of this puzzle should communicate
together cohesively. All these questions fall into the architecture and design pattern paradigms.

Unified Modeling Language (UML)

When you are talking about software design, it is vital that you have a common language with all
stakeholders and developers on a project. Capturing the intent of software design, no matter the
implementation technology, is the goal of having a unified language that is simple enough for
everybody to understand.

The Unified Modeling Language was created because programming languages, or even
pseudocode, are usually not at a high enough level of abstraction.
UML helps developers create a graphical notation of the programs that are being built. UML
diagrams are especially useful for describing, or rather sketching, code written in an object-oriented style.

One example could be the following UML class diagram

Here, you can see two classes (Device and Router); the Router class inherits all the fields
and methods from the Device class. Inheritance is shown with a solid line and an arrow at
the end.
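A hedged Python sketch of the same relationship (the hostname field and add_route() method are illustrative):

class Device:
    def __init__(self, hostname):
        self.hostname = hostname

    def get_hostname(self):
        return self.hostname

class Router(Device):
    # Router inherits every field and method defined on Device
    def add_route(self, prefix, next_hop):
        print(f"{self.hostname}: route {prefix} via {next_hop}")

r = Router("edge-01")
print(r.get_hostname())                    # inherited from Device
r.add_route("10.0.0.0/24", "192.0.2.1")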

The UML can sketch your program before you start writing code for it. It can define many details
of a class and the connection between other classes. You can use UML as part of the
documentation for a program, or use it for reverse-engineering an existing application, to get a
better picture of how the system works.

Architectural Patterns

The architecture of an application is the overall organization of the system, and it has a broader
scope.

Architecture can be referred to as an abstraction of the entire system, with a focus on certain
details of the implementation. Architecture is concerned with the API or public side of the
system that carries the communication between components and it’s not concerned with
implementation details. Architecture is composed of multiple structures that include software
components and the relations between them.

Architectures are composed to solve a specific problem. Some compositions happen to be more
useful than others, so they become documented as architecture patterns that people can refer
to.

The desired attributes of a system need to be recognized and considered when designing
architecture of a system. If your system needs to be highly secure, then you will have to decide
which elements of the system are critical and how you will limit the communication towards
them.
A decision on software architecture can be made while studying these characteristics of a
system:

● Performance
● Availability
● Modifiability
● Testability
● Usability
● Security

Some of the commonly known software architecture patterns are:

● Layered or multitier architecture pattern
● Event-driven architecture pattern
● Microservices architecture pattern
● Model-View-Controller (MVC) architecture pattern
● Space-based architecture pattern

Software Design Patterns

A good software architecture is important, but it is not enough to establish good quality in a
system. To ensure the best experience for all parties involved, the attributes of the system, besides
being well designed, also need to be well implemented. Software design patterns dive into the
separate components and ensure that optimal coding techniques and patterns are used in order to
avoid highly coupled and tangled code.

Software design patterns provide solutions to commonly occurring obstacles in software design.
They are concepts for solving problems and not libraries that you would import to your code.
Unlike algorithms, design patterns do not define a clear set of actions but rather a high-level
definition of a solution, so the same pattern applied to different applications can result in different
code.
Software design patterns can reduce the time of development because they promote reusability.
Loosely coupled code is easier to reuse than tangled code.
The sections that are usually discussed with the design patterns are:

● Intent
● Motivation
● Applicability
● Structure in modeling language
● Implementation and sample code

The groups into which patterns are divided are:

● Creational patterns, which are concerned with class or object creation
mechanisms
● Structural patterns, which deal with class or object compositions for maintaining flexibility in
larger projects
● Behavioral patterns, which describe ways of interaction between classes or objects

The patterns follow many design principles.

As an example, observe the Singleton Pattern:


This pattern ensures that a class has only one instance while providing a global access point to
it. Once the object exists, you expect to get that same object back instead of a new one. Using this
pattern, you can provide global access to an object without having to store it in a global variable.
The Singleton pattern enables access to an object from anywhere, and it also protects the object
from being overwritten (something a global variable does not protect against). The protection is
achieved by making the class constructor private and creating a static method that, when called by
your code, returns the original instance to the caller.
The Singleton pattern can be represented in UML as follows.

Converted into Python code, the class DataAccess can be instantiated only once: the
constructor checks whether an instance already exists (if it does, an error is raised), and a static
method returns the existing instance to the caller.
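A minimal sketch of such a DataAccess class (the get_instance() method name is illustrative):

class DataAccess:
    _instance = None

    def __init__(self):
        # direct instantiation is only allowed while no instance exists
        if DataAccess._instance is not None:
            raise RuntimeError("DataAccess already exists; use get_instance()")
        DataAccess._instance = self

    @staticmethod
    def get_instance():
        # the global access point always returns the one existing object
        if DataAccess._instance is None:
            DataAccess()
        return DataAccess._instance

a = DataAccess.get_instance()
b = DataAccess.get_instance()
print(a is b)    # True - both names refer to the same instance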

Layered Architecture Pattern

Also known as the multitier or n-tier architecture pattern, this is one of the most common
general-purpose architecture patterns. It relates to the organizational structure of most companies.
Software components in this pattern are organized in horizontal layers, where each layer performs
a specific role in the application. There is no limit on the number of layers you can use, but there
are four typical layers:

● Presentation layer - User interface communication
● Business layer - Business rules processing
● Persistence layer - Data persistence handling
● Database layer - Database storage technology
The responsibility of the presentation layer is to handle the logic for user interface
communication. It processes the input of the user, which is then passed down to the other layers
that handle the request and return results, which are then formatted for the user.

The business layer performs specific business rules based on the events that happen in the
system or on requests that originate from the user. Any action that is considered part of the
business functionality belongs in the business layer.

The persistence layer handles requests for data in the database layer. When the business
layer needs to read or write information, the persistence layer performs the required action
against the database layer.

Each layer handles its own domain of actions and should not be concerned with how the
actions are performed in other layers. It is important to mention that requests originating in a
top layer and destined for a bottom layer have to traverse all intermediate layers; you can't go
directly to the bottom layer, as that would violate the isolation principle.

When special layers need to be created, you need to determine whether information will always
have to traverse them (a closed type layer) or whether requests may bypass the special layer
(an open type layer).

The layers in the layered architecture should have well defined APIs or interfaces over which
they communicate. This way, your system will be more loosely coupled and easier to maintain
and test.

Consider the following example

In the presentation layer, the overall behavior and user experience of ordering items is
developed. When a user executes an action on the Orders page, it is delegated to the Orders
Agent, which is listening for these events. The request goes to the business layer, which
determines how to handle it; if needed, the business layer can pass the information
over to the persistence layer, which determines whether it has to read or write information in the database.
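A hedged Python sketch of that flow (class and method names are illustrative, not taken from any particular framework):

class OrderDAO:                      # persistence layer
    def read(self, order_id):
        # stands in for a real database query in the database layer
        return {"id": order_id, "status": "shipped"}

class OrderService:                  # business layer
    def __init__(self, dao):
        self.dao = dao

    def get_order(self, order_id):
        order = self.dao.read(order_id)
        order["display_status"] = order["status"].upper()   # a business rule
        return order

class OrdersAgent:                   # presentation layer delegate
    def __init__(self, service):
        self.service = service

    def on_view_order(self, order_id):
        return self.service.get_order(order_id)

agent = OrdersAgent(OrderService(OrderDAO()))
print(agent.on_view_order(42))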

MVC Architecture Pattern

The Model-View-Controller (MVC) architecture pattern is one of the best-known and most widely
used architectural patterns; it played an influential role mainly in user interface
frameworks and is still relevant and used in interactive desktop and web applications today.
A framework provides skeleton code for building an application that can be extended
by the programmer to create a specific implementation. It usually comes with already-implemented
handlers for user authentication, database connections, and other frequently used
actions.

MVC describes the implementation of software components in terms of their responsibilities. It
introduces three object roles, which enable separation of concerns and independent development
of each component:

Model: Defines the user data and encapsulates what that information is
View: Renders the data on the display
Controller: Any change made on the view is handled by the controller component; the controller
can report to the model component to make any needed change

The dependencies between the components govern the behavior of the system.

The view depends on the model and obtains data from it, which allows you to develop the model
component without knowing what will be in the view component. A model component should be
independent of the presentation output and input behavior.

The view component makes development easier, allowing you to have multiple views for different
purposes.

The Controller sits between the model and the view and it depends on both. Controllers receive
inputs from the interaction with the view component, which are translated into requests to the
model component.
In MVC, the view component is the entry point for a user accessing an application; it implements
how the data is presented to the user and is often referred to as the front end.

The controller and model components are typically developed using a different set of technologies
and are referred to as the back end. The separation between the components enables you to
write the code in different programming languages and technologies, as long as you are
capable of connecting everything together using the desired tools.

The tasks of the controller component are to accept the input and translate that to a request
either to the model or view.

The model defines the state of an object in the application and implements the functions for
accessing the data of modeled objects

Consider the flow of requests in the following diagram.

A user interacting with the application view initiates an event that the controller receives as an
Input (1). The controller receives the request that the user initiated and interprets it against the
set of rules and procedures that are defined inside the controller for that view. The controller
sends the state change request to the model component, which should manipulate the model
data (2).

After the requests are processed, the controller can ask the view component to change the
presentation (3). The model component has to interpret the requests coming from the controller
and store the new state

As a consequence of the state change, a change-propagation mechanism is initiated to inform
the observers of the model that they should update their presentation based on the new state (4).
After the notification from the model, the view component starts the update process and
requests the new state directly from the model (5). After the model responds, the view is
redrawn using the new data. The view component can also request the state from the model after
the controller has requested a change on the view.
The MVC architectural pattern is composed of multiple software design patterns that maintain
the relationships between the components and solve problems introduced by some of MVC's
features.

Common design patterns used in MVC:

● Observer pattern
● Strategy pattern
● Composite pattern
● Factory method
● Adapter pattern

MVC lets you change how a view responds to user input without having to change or develop
another view component.

The relationship between the view and a controller is an example of the strategy design pattern.
This behavioral design pattern suggests taking a class that has many different related
implementations and extracting all the implementations into separate classes, which are then
called strategies.

When your user interface can be combined hierarchically using different elements, such as
nested frames and buttons that can be represented as a tree, then you are probably using a
structural composite design pattern.

There is an issue with MVC: you can have multiple views that use the same model in your
application, so you want to make sure that all active views get updated on a model state
change. A change-propagation mechanism is therefore initiated to inform all participants that the
state has changed; when the state changes, the model notifies the observers (the views).
In this partial UML diagram showing the model component as a class implementation, you can
see the required state and method fields.

The model stores data of some sort (or can query it), and also has a list of all the observers that
are subscribed to changes that occur in the model. It implements methods for attaching and
detaching new observers and a method for notifying them on a state change

As the next figure shows, an observable, in this case the user model, will notify all observers on
a state change. If a new observer is required it should be easy to attach it to the group of
observers.
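A minimal Python sketch of this change-propagation mechanism (attribute and method names are illustrative):

class UserModel:
    def __init__(self):
        self._observers = []
        self._state = {}

    def attach(self, observer):
        self._observers.append(observer)

    def detach(self, observer):
        self._observers.remove(observer)

    def set_state(self, key, value):
        self._state[key] = value
        self._notify()                      # propagate the change

    def get_state(self):
        return dict(self._state)

    def _notify(self):
        for observer in self._observers:
            observer.update()

class UserView:
    def __init__(self, model):
        self.model = model
        model.attach(self)

    def update(self):
        # after the notification, the view pulls the new state from the model
        print("redrawing view with", self.model.get_state())

model = UserModel()
view = UserView(model)
model.set_state("username", "admin")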
Like all patterns and methods, MVC has benefits and drawbacks that need to be pointed out.

Benefits of the MVC pattern

● Separation of concerns - component-based development, where each component performs its
role:
○ View takes care of the presentation side of the application
○ Model defines the state of the application
○ Controller governs the behavior of user actions against the view and the model
● Multiple views of the same model
● Flexible presentation changes
● Independent testing of components
● Pluggable components

Downsides of the MVC pattern

● Increased complexity
● Excessive number of change notifications
● View and controller tight coupling
● Separate controller is sometimes not needed

A couple of variations of the MVC architectural pattern have been introduced; they are similar
to MVC but handle some things differently. Examples are Model-View-ViewModel (MVVM) and
Model-View-Presenter (MVP).

One of the drawbacks of the MVC pattern is the increased complexity that appears when
use cases require a nonoptimal implementation in order to stay within the frame of MVC
definitions. For example, suppose your view has a form that should only be enabled based on
some state in the model. The correct way of enabling the form would be an event that is triggered
and propagated to the model, which then notifies the observers to fetch the state, and only after
that is the form enabled. This process is rather complex because the model in MVC does not
know about the views directly, so it can't propagate changes to a specific view when necessary.

Also, sometimes, there can be a lot of change propagations that do not benefit all the views that
use the observed model.

Implementing MVC
As you are aware, there are 3 components (Model - View - Controller) that you implement
together with connections between them. The important thing is to make sure these
components are not tightly coupled together. The view should be independent from the model;
any change done should not affect other views that relate to the same model.
You should start designing your program on a higher level, using UML class diagrams that can
specify how your program acts.

Take into consideration the following example:

The program has one view (UserView) that uses a Controller (UserController) interface and a
Model (UserModel). The controller is implemented with SimpleController class, which defines
create() and get() for working with users. The UserController talks to the UserModel to store
information, and the view contacts the model after the state has been changed.
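A hedged sketch of that structure in Python (the method bodies are illustrative):

class UserModel:
    def __init__(self):
        self._users = {}

    def store(self, name, data):
        self._users[name] = data

    def fetch(self, name):
        return self._users.get(name)

class UserController:                        # the controller interface
    def create(self, name, data):
        raise NotImplementedError

    def get(self, name):
        raise NotImplementedError

class SimpleController(UserController):     # concrete implementation
    def __init__(self, model):
        self.model = model

    def create(self, name, data):
        self.model.store(name, data)

    def get(self, name):
        return self.model.fetch(name)

controller = SimpleController(UserModel())
controller.create("alice", {"role": "admin"})
print(controller.get("alice"))               # the view would read this state from the model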

MVC is not dependent on programming language.

MVC Frameworks

With access to patterns such as MVC, you might find yourself using and reusing ideas all the
time in different projects. Your implemented application has a nice structure, and you feel
comfortable exporting it into other projects; this is what we call a framework: the skeleton of
an application, which includes methods, functions, and libraries.

When developing using the MVC model, there are many frameworks that provide you with
generic functionality and can be adopted by your own implementations. You should be aware
that using frameworks does not prevent you from writing tangled code, but the structure should
guide you to decrease such problems.
The oldest MVC framework is Smalltalk-80, written in the 1980s; however, there are
more modern frameworks such as:

● ASP.NET MVC
● Django, Pyramid, web2py
● Symfony, Laravel, Zend Framework
● AngularJS, EmberJS, Wakanda
● Spring

Observer Design Pattern

Observer design patterns are behavioral patterns that define a one-to-many dependency
between objects, together with a subscription mechanism for informing subscribed objects on
changes happening on the object they are observing. The pattern is also known as Event-
Subscriber or Listener.

The fundamental elements in this pattern are the observable, or publisher, and the observer, or
subscriber. An observable can have any number of dependent observers. Observers are
notified whenever the subject goes through a state change. When a state change happens, the
observers that were notified contact the observable (the subject) to synchronize their state. The
publisher is also known as the subject, and the observers can be referred to as subscribers.
The publisher sends the notifications, but it does not need to know who the subscribers are or
how many of them are subscribed.

The idea of the observer pattern is that you add a subscription mechanism to the class from
which you want to generate change notifications. Other objects (Observers) will be able to
subscribe or end subscription for a certain stream of change notifications coming from this
publisher (Observable) class.

In the observer pattern, the notify procedure should be called whenever a state change occurs,
whether the application code triggers the change or the publisher itself performs it.
Cisco’s DevNet Study Guide

1. What does MVC stand for?

Model View Controller

2. How does MVC behave?


The controller takes input, makes changes on the model if necessary, and informs the view to
update the presentation accordingly

3. Mention the Top 10 Application Security Risks


Broken Authentication - Attackers gain access to an account
Broken Access Control - Attackers acting as admins or users with privileges
Cross Site Scripting - Remote code execution on Victim’s browser
Injection - Attacker can send hostile data to an interpreter
Insecure deserialization - Remote code execution attacks
Security Misconfiguration - Exploit unpatched flaws
Sensitive Data Exposure - Stealing sensitive information
Using components with known vulnerabilities
XML External Entities - Exploit vulnerable XML processors

4. On the Observer Design Pattern, what type of pattern do they follow and how do they
behave?
It’s a Behavioral Pattern, and Observers are informed on changes to the object they are
Observing

5. What is UML?
Unified Modeling Language

6. What are Architectural Patterns?


Architecture can be referred to as an abstraction of the entire system, with a focus on
certain details of the implementation. Architecture is concerned with the API or public
side of the system that carries the communication between components.

7. What are the differences between XML and JSON?


XML relies on TAGS while JSON relies on Key/Value format

8. What is VIRL?
Virtual Internet Routing Lab

9. What are the characteristics of VIRL?

A Cisco network simulation platform running the same operating system as physical
routers and switches; it offers a configuration engine that can build Cisco configurations at
the push of a button
10. What type of authentication is used by WebEx teams?
Bearer Token

11. What type of authentication is used by RESTCONF/IOSXE?


Basic Auth

12. What type of authentication is used by NXOS/ACI?


Cookie (obtained by posting aaaUser credentials)

13. What type of authentication is used by Meraki?


X-Cisco-Meraki-API-Key

14. What type of authentication is used by the DNA Center?


X-Auth-Token header

15. What type of authentication is used by the UCS Director API?


API key in header

16. What type of authentication is used by the UCS Manager?


Cookie

17. What are the three API Constraints Mitigation Techniques?


● Pagination - Breaking large amount of data
● Rate limiting - Limits the rate of API requests
● Payload limiting - Limits the size of body request

18. What is the purpose of a Load Balancer?


Distributes network or application traffic across a number of servers

19. What is the purpose of a Firewall?


List of rules that dictates what packets are allowed to access or leave networks

20. What is the purpose of a Reverse Proxy?


Retrieve resources on behalf of a client from different servers, they are presented to the
client as if they originated from the reverse proxy.

21. What is the purpose of a NAT Gateway?


Translates internal (private) IP addresses to a public IP address, hiding the internal addressing
22. Match the Protocol with their related terms
● Device configurations - NETCONF
● Very Simple - JSON RPC
● WSDL (Web Services Description Language) - SOAP
● HTTP/2 - gRPC (Google RPC)

23. What are the 4 layers of NETCONF?


Content Layer - Configuration and Notification Data
Operations Layer - Defines Protocol Operations (get-config, delete-config, etc)
Message Layer - Encodes Remote Procedure Calls
Secure Transport Layer - Ensures Secure and Reliable Transport

24. Which GIT command merges two branches?


git merge

25. Which GIT command is used to create or delete a Branch?


git branch and git branch -d

26. Which Git command allows you to navigate between branches or create one and go
directly into it?
git checkout and git checkout -b

27. Which GIT command asks git to include updates to a commit?


git add

28. What is GIT diff and its icons?


Marks the differences between the commits, uses - for deleted items and + for added
items

29. Which Three statements about REST API are True?


Uses HTTP, is a stateless architectural style, and is better than SOAP for
performance-driven APIs

30. What are the three most common Data formats used for APIs?
XML, JSON and YAML

31. Which data format relies heavily on whitespace to define data structure (although it is
not strictly required)?
YAML

32. What is the purpose of Deserialization (Parsing) in Python?


Allow Python to convert a file into a Python Object to properly use it
33. What is the purpose of Serialization in Python?
Allow Python to convert its object to Data that a system can read

34. Which two options describe the purpose of a revision control system such as GIT?
Track who made changes and Keeps historic versions of a file

35. In GIT, what is the remote location from which code can be pulled off?
Remote Repository

36. Which GIT command is used to initialize a project?


git init

37. Which GIT command is used to update your working directory with a fix of a colleague?
git pull

38. What is the difference between Python LOADS and Python LOAD?
json.loads() requires a string, while json.load() requires a file object (file descriptor)
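For example:

import json

data = json.loads('{"hostname": "sw01"}')        # loads() parses a string
print(data["hostname"])

with open("device.json", "w") as f:              # create a sample file first
    f.write('{"hostname": "sw02"}')
with open("device.json") as f:
    print(json.load(f)["hostname"])              # load() reads a file object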

39. What are the HTTP operations used in APIs?


Get - Reads
Post - Creates
Put - Modifies completely
Patch - Modifies Partially
Delete - Deletes a resource

40. Which Python XML parsing library produces dictionary objects most similar to the built-in
JSON parser?
xmltodict
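For example (xmltodict is a third-party library, installed with pip install xmltodict):

import xmltodict

xml = "<device><hostname>sw01</hostname></device>"
parsed = xmltodict.parse(xml)
print(parsed["device"]["hostname"])    # nested access, similar to json.loads() output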

41. What is the purpose of Namespaces?


Namespaces allow multiple elements in an object, with the same tag name, to represent
different concepts.

42. What are the three major points of the LEAN Philosophy?
People, Process, Purpose

43. What is a Sprint?


Short period of time where people focus on small but significant changes
44. What are the most common HTTP Status codes?
200 - Status Ok
201 - Resource created
301 - Moved Permanently (the resource has moved)
400 - Bad Request
401 - Unauthorized
403 - Forbidden
404 - Not found
500 - Internal Server error
503 - Service Unavailable

45. What are the steps of the Test Driven Methodology?


Write tests
Test fail
Write code
Test Pass
Refactor Code

46. Which Github feature is used to initiate a Code Review?


Pull requests

47. Which code construct can be used to efficiently split the workload between multiple
groups of software developers?
Modules

48. What is the optimal combination of Cohesion and coupling?


Strong Cohesion with Loose coupling

49. What is Cohesion?


Defines if Classes, Modules and Functions aim for the same goal

50. What is coupling?


Specifies the dependency of the modules

51. What does encapsulation in OOP mean?


Conceals methods and data with the purpose to hide the implementation and restrict
direct access to the Object data

52. What are the characteristics of Monolithic applications?


Components are not running independently on different servers

53. What type of pattern is Singleton?


Creational
54. What type of pattern is Facade?
Structural

55. In POSTMAN, which 2 elements can be seen on the Response Panel?


Status code, Formatted Response Body

56. In POSTMAN, what are collections?


Requests that share common values

57. In POSTMAN, what are Variables?


Used to store dynamic information; a variable is referenced by using double curly brackets, for
example {{variable}}

58. In POSTMAN what are Environments?


A set of Variables and their values

59. With which values are variables populated when you share a Postman Collection?
With the Initial Values and Narrowest scope

60. How are Webhooks notifications transported to the users?


By using POST requests, when the client subscribes to a specific Webhook

61. What are Webhooks?


Often called reverse APIs; the server notifies the client of changes asynchronously

62. When using APIs or webhooks, which method is synchronous and which is asynchronous?
APIs are Synchronous and Webhooks are Asynchronous

63. What is the difference between Client-side Rate limit and Payload limiting?
Client-side rate limit, you limit the Rate of API requests and Payload limiting, limits the
size of request bodies

64. How Frequently is a new token issued in custom token Authentication?


When the old one expires

65. What does hardcoding mean?


Specifying data in the code

66. What are the three Layers of Cisco DNA?


Network Element Layer - Physical and Virtual devices
Platform Layer - Controllers to Abstract the network
Network-enabled applications Layer - Supports business services
67. What is Cisco ACI?
A Data Center SDN Solution that allows applications to dynamically request data center
resources

68. What is Cisco APIC?


Unified Point of Automation and management of Cisco ACI

69. What are the three components of Cisco NSO?


Programmatic interface
Highly scalable, highly available Database
Device Abstraction Layer

70. Which Cisco Management platform offers the most flexibility in supporting different types
of solutions?
Cisco NSO

71. Where is Cisco Intersight hosted?


Cloud based

72. Which architecture does Cisco Intersight Uses?


REST APIs, with payloads encapsulated in JSON

73. What is a UCS Manager?


Unifies the management of UCS blade and rack servers; UCS Central uses the UCS Manager
APIs to enable orchestration of multiple UCS domains

74. What is a UCS Central Software?


Manage multiple UCS domains while providing centralized access to inventory and
health status

75. Which tool provides a library of Microsoft Powershell cmdlets to manipulate Cisco UCS
manager objects?
UCS manager PowerTool Suite

76. What is Cisco Finesse?


A next-generation agent and supervisor desktop designed to provide a collaborative experience for
the communities that interact with your customer service organization
77. What are the Finesse REST APIs?
● User - Represents an agent or supervisor
● Dialogue - Represents Dialogues (like voice calls)
● Queue - Represents a Queue or Skill group
● Teams - Represents a team
● Teams Resource - Represents a team configuration
● Client Log - Container element that holds client log data
● Task Routing - Provides a standard way to request, queue, route, and handle third-party
multichannel tasks
● Single sign on - Mechanism used to authenticate users
● Team message - Messages that the administrator can send to a team

78. Which API allows you to Configure Users and devices on the Cisco Unified
Communications Manager?
Administrative XML API

79. To which security category does Cisco Firepower belong?


Network Security

80. Which two options are advantages of Cisco SD-WAN over Cisco Meraki?
Enables traffic Segmentation and Uses traditional Cisco Infrastructure

81. What are the three Cisco Meraki APIs?


Dashboard API, Scanning API and Captive Portal API

82. Which Teams API can be used to list the participants of a Group space?
/Memberships

83. What are the most common Protocols, Ports, and their uses?
20/21 - FTP - TCP
22 - SSH - TCP
23 - Telnet - TCP
25 - SMTP - TCP
53 - DNS - TCP and UDP
69 - TFTP - UDP
80 - HTTP - TCP
443 - HTTPS - TCP
161/162 - SNMP - UDP
123 - NTP - UDP
830 - NETCONF - TCP

84. What is an API?


An API is a way for two pieces of software to talk to each other, API stands for
Application Programming Interface.
85. How do you create a Virtual environment in Linux
virtualenv name

86. How do you create a Virtual environment in Windows


python -m venv <name>

87. How do you activate a venv in Windows


name\scripts\activate

88. How do you activate a venv in Linux


source name/bin/activate

89. In Python, how do you identify Lists, Tuples and Dicts?


Lists: []
Tuples: ()
Dicts: {}

90. What are the most common libraries for API Requests, NETCONF and Network CLI?
API - Requests
NETCONF - ncclient
Network CLI - netmiko

91. How do you enable the NXOS API?


By issuing the feature nxapi command

92. What are the contents of NX-API?


NX-API CLI and NX-API REST

93. What are the object names in NX-OS?


Relative names and Distinguished names

94. What are the most common locations in GIT?


Remote Repository
Local Repository
Working Directory
Staging Area

95. In GitHub, what are the commands that can be used to merge code?
Git merge
Pull requests
96. What are the steps in the Waterfall Design LifeCycle?
● Requirements/Analysis
● Design
● Coding
● Testing
● Maintenance

97. What are the layers of the Layered Architecture Pattern?


Presentation Layer
Business Layer
Persistence Layer
Database Layer

98. What are the HTTPS Server Status Codes?


400 - Bad Request
401 - Unauthorized
402 - Payment Required
403 - Forbidden
404 - Not Found

99. What are the most common HTTP Headers?


Content-type
Accept
Authorization
Date/Time stamp

100. In the HTTP Headers, what does Content-type stand for?


Specify the data format in the body

101. In the HTTP Headers, what does Accept stand for?


Specify the accepted response format

102. In the HTTP Headers, what does Authorization stand for?


The type of authorization that can be used, ranges from basic, token, cookie, bearer

103. In the HTTP Headers, what does Date/time stamp stand for?
Date/Time of the response or request

104. Describe the characteristics of NETCONF


Uses SSH, port 830, relies on XML, the verbs are get, get-config, edit-config, copy-config,
delete-config

105. Describe the characteristics of RESTCONF


Uses HTTP, port 80/443, the verbs are Get, Post, Patch, Put, Delete
106. Describe the characteristics of gRPC
Uses HTTP/2, port 9090, can use XML, JSON, Protobuf, Thrift.

107. On an architectural level, what is the difference between Virtual Machines and
Containers?
Virtual machines have their apps, bins, and libs running on top of a guest OS, which runs on the
hypervisor, while containers have their apps, bins, and libs running on top of the container
engine and share the host OS

108. What can be found on the Management Plane?


Exchange of management information

109. What can be found on the Control Plane?


Exchange of routing information

110. What can be found on the Data Plane?


Incoming Packets

111. What is the Agile Software Manifesto?


Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over Contract negotiation
Responding to change over Following a plan
