Commit dbf75465 authored by confi-surya, committed by Mark Goddard

Following the new PTI for document build

For compliance with the Project Testing Interface [1]
as described in [2]

[1]
https://governance.openstack.org/tc/reference/project-testing-interface.html
[2]
http://lists.openstack.org/pipermail/openstack-dev/2017-December/125710.html

The doc8 command is dropped from the docs tox envs, so nothing is affected
there; doc8 now runs as part of the pep8 env.
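
For reference, a PTI-style docs environment invokes ``sphinx-build`` directly
and takes its dependencies from a separate doc requirements file. The
``tox.ini`` sketch below illustrates the pattern; it is an assumed, simplified
env, not necessarily the exact one introduced by this change:

.. code-block:: ini

   # Docs env per the PTI: build HTML docs with sphinx-build only;
   # doc8 linting is handled by the pep8 env instead.
   [testenv:docs]
   basepython = python3
   deps = -r{toxinidir}/doc/requirements.txt
   commands =
     sphinx-build -W -b html doc/source doc/build/html

Keeping ``doc8`` out of this env leaves the docs env with the single
``sphinx-build`` command the PTI expects.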

Related-Bug: #1765348

Depends-On: Icc7fe3a8f9716281de88825e9d5b2fd84de3d00a
Change-Id: Idf9a16111479ccc64004eac9508da575822a3df5
parent 5c1f0226
Showing 93 additions and 74 deletions
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
# Order matters to the pip dependency resolver, so sorting this file
# changes how packages are installed. New dependencies should be
# added in alphabetical order, however, some dependencies may need to
# be installed in a specific order.
openstackdocstheme>=1.18.1 # Apache-2.0
reno>=2.5.0 # Apache-2.0
sphinx!=1.6.6,>=1.6.2 # BSD
......@@ -26,7 +26,7 @@ For the combined option, set the two variables below, while allowing the
other two to accept their default values. In this configuration all REST
API requests, internal and external, will flow over the same network.
.. code-block:: none
.. code-block:: yaml
kolla_internal_vip_address: "10.10.10.254"
network_interface: "eth0"
......@@ -37,7 +37,7 @@ For the separate option, set these four variables. In this configuration
the internal and external REST API requests can flow over separate
networks.
.. code-block:: none
.. code-block:: yaml
kolla_internal_vip_address: "10.10.10.254"
network_interface: "eth0"
......@@ -57,7 +57,7 @@ in your kolla deployment use the variables:
- kolla_internal_fqdn
- kolla_external_fqdn
.. code-block:: none
.. code-block:: yaml
kolla_internal_fqdn: inside.mykolla.example.net
kolla_external_fqdn: mykolla.example.net
......@@ -95,7 +95,7 @@ The configuration variables that control TLS networking are:
The default for TLS is disabled, to enable TLS networking:
.. code-block:: none
.. code-block:: yaml
kolla_enable_tls_external: "yes"
kolla_external_fqdn_cert: "{{ node_config_directory }}/certificates/mycert.pem"
......@@ -176,7 +176,7 @@ OpenStack Service Configuration in Kolla
An operator can change the location where custom config files are read from by
editing ``/etc/kolla/globals.yml`` and adding the following line.
.. code-block:: none
.. code-block:: yaml
# The directory to merge custom config files the kolla's config files
node_custom_config: "/etc/kolla/config"
......@@ -253,7 +253,7 @@ If a development environment doesn't have a free IP address available for VIP
configuration, the host's IP address may be used here by disabling HAProxy by
adding:
.. code-block:: none
.. code-block:: yaml
enable_haproxy: "no"
......@@ -269,7 +269,7 @@ External Elasticsearch/Kibana environment
It is possible to use an external Elasticsearch/Kibana environment. To do this
first disable the deployment of the central logging.
.. code-block:: none
.. code-block:: yaml
enable_central_logging: "no"
......@@ -285,7 +285,7 @@ It is sometimes required to use a different than default port
for service(s) in Kolla. It is possible with setting
``<service>_port`` in ``globals.yml`` file. For example:
.. code-block:: none
.. code-block:: yaml
database_port: 3307
......@@ -301,7 +301,7 @@ By default, Fluentd is used as a syslog server to collect Swift and HAProxy
logs. When Fluentd is disabled or you want to use an external syslog server,
You can set syslog parameters in ``globals.yml`` file. For example:
.. code-block:: none
.. code-block:: yaml
syslog_server: "172.29.9.145"
syslog_udp_port: "514"
......@@ -311,7 +311,7 @@ You can set syslog parameters in ``globals.yml`` file. For example:
You can also set syslog facility names for Swift and HAProxy logs.
By default, Swift and HAProxy use ``local0`` and ``local1``, respectively.
.. code-block:: none
.. code-block:: yaml
syslog_swift_facility: "local0"
syslog_haproxy_facility: "local1"
......
......@@ -87,7 +87,7 @@ that Kolla uses throughout that should be followed.
content:
.. path ansible/roles/common/templates/cron-logrotate-PROJECT.conf.j2
.. code-block:: none
.. code-block:: console
"/var/log/kolla/PROJECT/*.log"
{
......
......@@ -26,7 +26,7 @@ To enable dev mode for all supported services, set in
``/etc/kolla/globals.yml``:
.. path /etc/kolla/globals.yml
.. code-block:: none
.. code-block:: yaml
kolla_dev_mode: true
......@@ -35,7 +35,7 @@ To enable dev mode for all supported services, set in
To enable it just for heat, set:
.. path /etc/kolla/globals.yml
.. code-block:: none
.. code-block:: yaml
heat_dev_mode: true
......@@ -70,7 +70,7 @@ make sure it is installed in the container in question:
Then, set your breakpoint as follows:
.. code-block:: none
.. code-block:: python
from remote_pdb import RemotePdb
RemotePdb('127.0.0.1', 4444).set_trace()
......
......@@ -91,7 +91,7 @@ resolving the deployment host's hostname to ``127.0.0.1``, for example:
The following lines are desirable for IPv6 capable hosts:
.. code-block:: none
.. code-block:: console
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
......@@ -109,12 +109,14 @@ Build a Bifrost Container Image
This section provides instructions on how to build a container image for
bifrost using kolla.
Currently kolla only supports the ``source`` install type for the bifrost image.
Currently kolla only supports the ``source`` install type for the
bifrost image.
#. To generate kolla-build.conf configuration File
* If required, generate a default configuration file for :command:`kolla-build`:
* If required, generate a default configuration file for
:command:`kolla-build`:
.. code-block:: console
......
......@@ -95,7 +95,7 @@ In this output, look for the key ``X-Compute-Request-Id``. This is a unique
identifier that can be used to track the request through the system. An
example ID looks like this:
.. code-block:: none
.. code-block:: console
X-Compute-Request-Id: req-c076b50a-6a22-48bf-8810-b9f41176a6d5
......
......@@ -99,10 +99,10 @@ To prepare the journal external drive execute the following command:
Configuration
~~~~~~~~~~~~~
Edit the ``[storage]`` group in the inventory which contains the hostname of the
hosts that have the block devices you have prepped as shown above.
Edit the ``[storage]`` group in the inventory which contains the hostname
of the hosts that have the block devices you have prepped as shown above.
.. code-block:: none
.. code-block:: ini
[storage]
controller
......@@ -340,7 +340,7 @@ implement caching.
Here is the top part of the multinode inventory file used in the example
environment before adding the 3rd node for Ceph:
.. code-block:: none
.. code-block:: ini
[control]
# These hostname must be resolvable from your deployment host
......@@ -384,7 +384,7 @@ Next, edit the multinode inventory file and make sure the 3 nodes are listed
under ``[storage]``. In this example I will add kolla3.ducourrier.com to the
existing inventory file:
.. code-block:: none
.. code-block:: ini
[control]
# These hostname must be resolvable from your deployment host
......
......@@ -38,7 +38,7 @@ During development, it may be desirable to use file backed block storage. It
is possible to use a file and mount it as a block device via the loopback
system.
.. code-block:: none
.. code-block:: console
free_device=$(losetup -f)
fallocate -l 20G /var/lib/cinder_data.img
......@@ -67,7 +67,7 @@ NFS
To use the ``nfs`` backend, configure ``/etc/exports`` to contain the mount
where the volumes are to be stored:
.. code-block:: none
.. code-block:: console
/kolla_nfs 192.168.5.0/24(rw,sync,no_root_squash)
......@@ -89,7 +89,7 @@ Then start ``nfsd``:
On the deploy node, create ``/etc/kolla/config/nfs_shares`` with an entry for
each storage node:
.. code-block:: none
.. code-block:: console
storage01:/kolla_nfs
storage02:/kolla_nfs
......
......@@ -103,7 +103,7 @@ Ceph) into the same directory, for example:
.. end
.. code-block:: none
.. code-block:: console
$ cat /etc/kolla/config/glance/ceph.client.glance.keyring
......
......@@ -183,8 +183,9 @@ all you need to do is the following steps:
.. end
#. Set the common password for all components within ``/etc/kolla/passwords.yml``.
In order to achieve that you could use the following command:
#. Set the common password for all components within
``/etc/kolla/passwords.yml``. In order to achieve that you
could use the following command:
.. code-block:: console
......
......@@ -116,7 +116,7 @@ be found on `Cloudbase website
Add the Hyper-V node in ``ansible/inventory`` file:
.. code-block:: none
.. code-block:: ini
[hyperv]
<HyperV IP>
......
......@@ -18,7 +18,7 @@ Preparation and Deployment
To allow Docker daemon connect to the etcd, add the following in the
``docker.service`` file.
.. code-block:: none
.. code-block:: ini
ExecStart= -H tcp://172.16.1.13:2375 -H unix:///var/run/docker.sock --cluster-store=etcd://172.16.1.13:2379 --cluster-advertise=172.16.1.13:2375
......
......@@ -369,7 +369,8 @@ Use the manila migration command, as shown in the following example:
Checking share migration progress
---------------------------------
Use the :command:`manila migration-get-progress shareID` command to check progress.
Use the :command:`manila migration-get-progress shareID` command to
check progress.
.. code-block:: console
......
......@@ -360,4 +360,4 @@ For more information about how to manage shares, see the
For more information about how HNAS driver works, see
`Hitachi NAS Platform File Services Driver for OpenStack
<https://docs.openstack.org/manila/latest/admin/hitachi_hnas_driver.html>`__.
\ No newline at end of file
<https://docs.openstack.org/manila/latest/admin/hitachi_hnas_driver.html>`__.
......@@ -4,9 +4,9 @@
Networking in Kolla
===================
Kolla deploys Neutron by default as OpenStack networking component. This section
describes configuring and running Neutron extensions like LBaaS, Networking-SFC,
QoS, and so on.
Kolla deploys Neutron by default as the OpenStack networking component.
This section describes configuring and running Neutron extensions like
LBaaS, Networking-SFC, QoS, and so on.
Enabling Provider Networks
==========================
......@@ -218,7 +218,7 @@ it is advised to allocate them via the kernel command line instead to prevent
memory fragmentation. This can be achieved by adding the following to the grub
config and regenerating your grub file.
.. code-block:: none
.. code-block:: console
default_hugepagesz=2M hugepagesz=2M hugepages=25000
......@@ -233,16 +233,17 @@ While it is technically possible to use all 3 only ``uio_pci_generic`` and
and distributed as part of the dpdk library. While it has some advantages over
``uio_pci_generic`` loading the ``igb_uio`` module will taint the kernel and
possibly invalidate distro support. To successfully deploy ``ovs-dpdk``,
``vfio_pci`` or ``uio_pci_generic`` kernel module must be present on the platform.
Most distros include ``vfio_pci`` or ``uio_pci_generic`` as part of the default
kernel though on some distros you may need to install ``kernel-modules-extra`` or
the distro equivalent prior to running :command:`kolla-ansible deploy`.
``vfio_pci`` or ``uio_pci_generic`` kernel module must be present on the
platform. Most distros include ``vfio_pci`` or ``uio_pci_generic`` as part of
the default kernel though on some distros you may need to install
``kernel-modules-extra`` or the distro equivalent prior to running
:command:`kolla-ansible deploy`.
Installation
------------
To enable ovs-dpdk, add the following configuration to ``/etc/kolla/globals.yml``
file:
To enable ovs-dpdk, add the following configuration to
``/etc/kolla/globals.yml`` file:
.. code-block:: yaml
......@@ -308,9 +309,10 @@ Modify the ``/etc/kolla/globals.yml`` file as the following example shows:
.. end
Modify the ``/etc/kolla/config/neutron/ml2_conf.ini`` file and add ``sriovnicswitch``
to the ``mechanism_drivers``. Also, the provider networks used by SRIOV should be configured.
Both flat and VLAN are configured with the same physical network name in this example:
Modify the ``/etc/kolla/config/neutron/ml2_conf.ini`` file and add
``sriovnicswitch`` to the ``mechanism_drivers``. Also, the provider
networks used by SRIOV should be configured. Both flat and VLAN are configured
with the same physical network name in this example:
.. path /etc/kolla/config/neutron/ml2_conf.ini
.. code-block:: ini
......@@ -331,9 +333,9 @@ Add ``PciPassthroughFilter`` to scheduler_default_filters
The ``PciPassthroughFilter``, which is required by Nova Scheduler service
on the Controller, should be added to ``scheduler_default_filters``
Modify the ``/etc/kolla/config/nova.conf`` file and add ``PciPassthroughFilter``
to ``scheduler_default_filters``. this filter is required by The Nova Scheduler
service on the controller node.
Modify the ``/etc/kolla/config/nova.conf`` file and add
``PciPassthroughFilter`` to ``scheduler_default_filters``. This filter is
required by the Nova Scheduler service on the controller node.
.. path /etc/kolla/config/nova.conf
.. code-block:: ini
......@@ -489,12 +491,12 @@ so in environments that have NICs with multiple ports configured for SRIOV,
it is impossible to specify a specific NIC port to pull VFs from.
Modify the file ``/etc/kolla/config/nova.conf``. The Nova Scheduler service
on the control node requires the ``PciPassthroughFilter`` to be added to the list
of filters and the Nova Compute service(s) on the compute node(s) need PCI
device whitelisting. The Nova API service on the control node and the Nova
on the control node requires the ``PciPassthroughFilter`` to be added to the
list of filters and the Nova Compute service(s) on the compute node(s) need
PCI device whitelisting. The Nova API service on the control node and the Nova
Compute service on the compute node also require the ``alias`` option under the
``[pci]`` section. The alias can be configured as 'type-VF' to pass VFs or 'type-PF'
to pass the PF. Type-VF is shown in this example:
``[pci]`` section. The alias can be configured as 'type-VF' to pass VFs or
'type-PF' to pass the PF. Type-VF is shown in this example:
.. path /etc/kolla/config/nova.conf
.. code-block:: ini
......@@ -514,8 +516,8 @@ Run deployment.
Verification
------------
Create (or use an existing) flavor, and then configure it to request one PCI device
from the PCI alias:
Create (or use an existing) flavor, and then configure it to request one PCI
device from the PCI alias:
.. code-block:: console
......@@ -534,4 +536,5 @@ Start a new instance using the flavor:
Verify VF devices were created and the instance starts successfully as in
the Neutron SRIOV case.
For more information see `OpenStack PCI passthrough documentation <https://docs.openstack.org/nova/pike/admin/pci-passthrough.html>`_.
\ No newline at end of file
For more information see `OpenStack PCI passthrough documentation <https://docs.openstack.org/nova/pike/admin/pci-passthrough.html>`_.
......@@ -5,10 +5,10 @@ Nova Fake Driver
================
One common question from OpenStack operators is that "how does the control
plane (for example, database, messaging queue, nova-scheduler ) scales?". To answer
this question, operators setup Rally to drive workload to the OpenStack cloud.
However, without a large number of nova-compute nodes, it becomes difficult to
exercise the control performance.
plane (for example, database, messaging queue, nova-scheduler) scale?".
To answer this question, operators set up Rally to drive workload to the
OpenStack cloud. However, without a large number of nova-compute nodes,
it becomes difficult to exercise the control plane performance.
Given the built-in feature of Docker container, Kolla enables standing up many
of Compute nodes with nova fake driver on a single host. For example,
......@@ -19,9 +19,9 @@ Use nova-fake driver
~~~~~~~~~~~~~~~~~~~~
Nova fake driver can not work with all-in-one deployment. This is because the
fake ``neutron-openvswitch-agent`` for the fake ``nova-compute`` container conflicts
with ``neutron-openvswitch-agent`` on the Compute nodes. Therefore, in the
inventory the network node must be different than the Compute node.
fake ``neutron-openvswitch-agent`` for the fake ``nova-compute`` container
conflicts with ``neutron-openvswitch-agent`` on the Compute nodes. Therefore,
in the inventory the network node must be different than the Compute node.
By default, Kolla uses libvirt driver on the Compute node. To use nova-fake
driver, edit the following parameters in ``/etc/kolla/globals.yml`` or in
......@@ -35,5 +35,5 @@ the command line options.
.. end
Each Compute node will run 5 ``nova-compute`` containers and 5
``neutron-plugin-agent`` containers. When booting instance, there will be no real
instances created. But :command:`nova list` shows the fake instances.
``neutron-plugin-agent`` containers. When booting an instance, there will be
no real instances created, but :command:`nova list` shows the fake instances.
......@@ -82,7 +82,7 @@ table** example listed above. Please modify accordingly if your setup is
different.
Prepare for Rings generating
----------------------------
----------------------------
To prepare for Swift Rings generating, run the following commands to initialize
the environment variable and create ``/etc/kolla/config/swift`` directory:
......@@ -251,4 +251,4 @@ A very basic smoke test:
| Bytes | 6684 |
| Containers | 1 |
| Objects | 1 |
+------------+---------------------------------------+
\ No newline at end of file
+------------+---------------------------------------+
......@@ -190,4 +190,4 @@ can be cleaned up executing ``cleanup-tacker`` script.
$ sh cleanup-tacker
.. end
\ No newline at end of file
.. end
......@@ -61,9 +61,9 @@ For more information, please see `VMware NSX-V documentation <https://docs.vmwar
In addition, it is important to modify the firewall rule of vSphere to make
sure that VNC is accessible from outside VMware environment.
On every VMware host, edit /etc/vmware/firewall/vnc.xml as below:
On every VMware host, edit ``/etc/vmware/firewall/vnc.xml`` as below:
.. code-block:: none
.. code-block:: xml
<!-- FirewallRule for VNC Console -->
<ConfigRoot>
......@@ -216,7 +216,8 @@ Options for Neutron NSX-V support:
.. end
Then you should start :command:`kolla-ansible` deployment normally as KVM/QEMU deployment.
Then you should start :command:`kolla-ansible` deployment normally as
KVM/QEMU deployment.
VMware NSX-DVS
......@@ -293,7 +294,8 @@ Options for Neutron NSX-DVS support:
.. end
Then you should start :command:`kolla-ansible` deployment normally as KVM/QEMU deployment.
Then you should start :command:`kolla-ansible` deployment normally as
KVM/QEMU deployment.
For more information on OpenStack vSphere, see
`VMware vSphere
......
......@@ -17,7 +17,7 @@ configure kuryr refer to :doc:`kuryr-guide`.
To allow Zun Compute connect to the Docker Daemon, add the following in the
``docker.service`` file on each zun-compute node.
.. code-block:: none
.. code-block:: ini
ExecStart= -H tcp://<DOCKER_SERVICE_IP>:2375 -H unix:///var/run/docker.sock --cluster-store=etcd://<DOCKER_SERVICE_IP>:2379 --cluster-advertise=<DOCKER_SERVICE_IP>:2375
......