Cleaned up readme by relabelling sections:

- "Quick Start": docker-compose command plus published interfaces and how to access them. Clarified how to get docker internal network ip address
- "Configure Environment Variables": moved hadoop.env configuration to its own section for clarity
Zed 2017-12-14 11:59:32 +08:00
parent 9af1f86bdb
commit fd1c6643b8


@@ -8,13 +8,25 @@ Version 1.1.0 introduces healthchecks for the containers.
 * 2.7.1 with OpenJDK 7
 * 2.7.1 with OpenJDK 8
-## Description
+## Quick Start
 To deploy an example HDFS cluster, run:
 ```
 docker-compose up
 ```
+`docker-compose` creates a docker network that can be found by running `docker network list`, e.g. `dockerhadoop_default`.
+Run `docker network inspect` on the network (e.g. `dockerhadoop_default`) to find the IP the Hadoop interfaces are published on. Access these interfaces with the following URLs:
+* Namenode: http://<dockerhadoop_IP_address>:50070/dfshealth.html#tab-overview
+* History server: http://<dockerhadoop_IP_address>:8188/applicationhistory
+* Datanode: http://<dockerhadoop_IP_address>:50075/
+* Nodemanager: http://<dockerhadoop_IP_address>:8042/node
+* Resource manager: http://<dockerhadoop_IP_address>:8088/
+## Configure Environment Variables
 The configuration parameters can be specified in the hadoop.env file or as environment variables for specific services (e.g. namenode, datanode etc.):
 ```
 CORE_CONF_fs_defaultFS=hdfs://namenode:8020
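
To make the Quick Start steps above concrete, here is a minimal shell sketch. It assumes the compose project is named such that the network comes up as `dockerhadoop_default`, as in the example above:

```
# Start the example cluster in the background.
docker-compose up -d

# List the networks docker-compose created; look for dockerhadoop_default.
docker network list

# Print each container's name and IPv4 address on that network
# (a Go template over the network's Containers map; addresses print
# with their subnet suffix, e.g. 172.18.0.2/16).
docker network inspect \
  -f '{{range .Containers}}{{.Name}} {{.IPv4Address}}{{println}}{{end}}' \
  dockerhadoop_default
```

The printed addresses are what the `<dockerhadoop_IP_address>` placeholder in the URLs above stands for; the exact addresses vary per machine and run.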
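The configuration variables can also be attached to a single service instead of being shared through hadoop.env. A hedged sketch of a compose override (the file name is illustrative; `namenode` and the variable are the examples from the hunk above):

```
# docker-compose.override.yml (hypothetical file): docker-compose merges
# this with docker-compose.yml, so the variable applies only to namenode.
version: "2"
services:
  namenode:
    environment:
      - CORE_CONF_fs_defaultFS=hdfs://namenode:8020
```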
@@ -37,10 +49,3 @@ The available configurations are:
 * /etc/hadoop/kms-site.xml KMS_CONF
 If you need to extend some other configuration file, refer to base/entrypoint.sh bash script.
-After starting the example Hadoop cluster, you should be able to access interfaces of all the components (substitute domain names by IP addresses from ```network inspect hadoop``` command):
-* Namenode: http://namenode:50070/dfshealth.html#tab-overview
-* History server: http://historyserver:8188/applicationhistory
-* Datanode: http://datanode:50075/
-* Nodemanager: http://nodemanager:8042/node
-* Resource manager: http://resourcemanager:8088/
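
To show how the file-to-prefix mapping above is meant to be used, a hedged hadoop.env sketch. The underscore-to-dot rewriting is an assumption inferred from the `CORE_CONF_fs_defaultFS` example (the actual rewriting lives in the base/entrypoint.sh script), and the HDFS_CONF_ prefix targeting /etc/hadoop/hdfs-site.xml is assumed by the same pattern as KMS_CONF above:

```
# hadoop.env fragment (sketch). The prefix selects the target file
# (assumed: HDFS_CONF_ -> /etc/hadoop/hdfs-site.xml, mirroring
# KMS_CONF_ -> /etc/hadoop/kms-site.xml); the remainder is read as the
# property name with underscores standing in for dots.
HDFS_CONF_dfs_replication=1
HDFS_CONF_dfs_webhdfs_enabled=true
```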