diff --git a/ansible/roles/baremetal/defaults/main.yml b/ansible/roles/baremetal/defaults/main.yml
index 859447834863560f15338de6446c483832485f70..4904b33caaf3f1caf2f66a8d128f62e996923d40 100644
--- a/ansible/roles/baremetal/defaults/main.yml
+++ b/ansible/roles/baremetal/defaults/main.yml
@@ -75,12 +75,16 @@ easy_install_available: >-
      not (ansible_distribution == 'Debian' and
           ansible_distribution_major_version is version(10, 'ge')) }}
 
+# Ubuntu 18+ bundles nfs-ganesha 2.6.0 with Ceph Mimic packages,
+# which performs a UDP rpcbind test even with NFSv3 disabled - therefore
+# rpcbind needs to be installed when Ceph NFS is enabled.
 debian_pkg_install:
  - "{{ docker_apt_package }}"
  - git
  - "{% if not easy_install_available %}python-pip{% endif %}"
  - python-setuptools
  - ntp
+ - "{% if enable_ceph_nfs|bool %}rpcbind{% endif %}"
 
 redhat_pkg_install:
  - epel-release
diff --git a/doc/source/reference/storage/ceph-guide.rst b/doc/source/reference/storage/ceph-guide.rst
index c538a6fdd9068bfe7796d98f7450e6b0c126694e..6ce522ce4ded02efbb90bacce2a2c385070018bd 100644
--- a/doc/source/reference/storage/ceph-guide.rst
+++ b/doc/source/reference/storage/ceph-guide.rst
@@ -118,7 +118,7 @@ are not mandatory.
 
 
 Using an external journal drive
--------------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 .. note::
 
@@ -179,6 +179,9 @@ Enable Ceph in ``/etc/kolla/globals.yml``:
 
    enable_ceph: "yes"
 
+Ceph RADOS Gateway
+~~~~~~~~~~~~~~~~~~
+
 RadosGW is optional, enable it in ``/etc/kolla/globals.yml``:
 
 .. code-block:: yaml
@@ -195,27 +198,10 @@ You can enable RadosGW to be registered as Swift in Keystone catalog:
 
     By default RadosGW supports both Swift and S3 API, and it is not
     completely compatible with Swift API. The option `ceph_rgw_compatibility`
-    in ``ansible/group_vars/all.yml`` can enable/disable the RadosGW
+    in ``/etc/kolla/globals.yml`` can enable/disable the RadosGW
     compatibility with Swift API completely. After changing the value, run the
     "reconfigure“ command to enable.
 
-Configure the Ceph store type in ``ansible/group_vars/all.yml``, the default
-value is ``bluestore`` in Rocky:
-
-.. code-block:: yaml
-
-   ceph_osd_store_type: "bluestore"
-
-.. note::
-
-    Regarding number of placement groups (PGs)
-
-    Kolla sets very conservative values for the number of PGs per pool
-    (`ceph_pool_pg_num` and `ceph_pool_pgp_num`). This is in order to ensure
-    the majority of users will be able to deploy Ceph out of the box. It is
-    *highly* recommended to consult the official Ceph documentation regarding
-    these values before running Ceph in any kind of production scenario.
-
 RGW requires a healthy cluster in order to be successfully deployed. On initial
 start up, RGW will create several pools. The first pool should be in an
 operational state to proceed with the second one, and so on. So, in the case of
@@ -230,6 +216,48 @@ copies for the pools before deployment. Modify the file
    osd pool default size = 1
    osd pool default min size = 1
 
+NFS
+~~~
+
+NFS is an optional feature; you can enable it in ``/etc/kolla/globals.yml``:
+
+.. code-block:: yaml
+
+   enable_ceph_nfs: "yes"
+
+.. note::
+
+   If you are using Ubuntu, please enable Ceph NFS before running the
+   ``kolla-ansible bootstrap-servers`` command - it installs the required
+   rpcbind package.
+
+Store type
+~~~~~~~~~~
+
+Configure the Ceph store type in ``/etc/kolla/globals.yml``; the default
+value in Rocky is ``bluestore``:
+
+.. code-block:: yaml
+
+   ceph_osd_store_type: "bluestore"
+
+Recommendations
+---------------
+
+Placement groups
+~~~~~~~~~~~~~~~~
+
+Kolla sets very conservative values for the number of PGs per pool
+(``ceph_pool_pg_num`` and ``ceph_pool_pgp_num``). This is to ensure that
+the majority of users can deploy Ceph out of the box. It is *highly*
+recommended to consult the official Ceph documentation regarding these
+values before running Ceph in any kind of production scenario.
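+
+A sketch of how such overrides could look in ``/etc/kolla/globals.yml``
+(the numbers below are purely illustrative, not recommendations - derive
+suitable values from the official Ceph guidance for your cluster):
+
+.. code-block:: yaml
+
+   ceph_pool_pg_num: 128
+   ceph_pool_pgp_num: 128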
+
+Cluster Network
+~~~~~~~~~~~~~~~
+
 To build a high performance and secure Ceph Storage Cluster, the Ceph community
 recommend the use of two separate networks: public network and cluster network.
 Edit the ``/etc/kolla/globals.yml`` and configure the ``cluster_interface``: