Merge pull request #17 from zeddee/cleaner-readme

Cleaned up readme for better usability
Ivan Ermilov 2018-01-07 18:07:54 +01:00 committed by GitHub
commit a24f8af944


@@ -10,13 +10,25 @@ Version 1.1.0 introduces healthchecks for the containers.
* 2.7.1 with OpenJDK 7
* 2.7.1 with OpenJDK 8
## Quick Start
To deploy an example HDFS cluster, run:
```
docker-compose up
```
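The image advertises container healthchecks as of version 1.1.0, so one quick sanity check after deploying is to run detached and watch the status column; this is plain Docker CLI usage, not something specific to this repo:
```
docker-compose up -d
docker ps --format 'table {{.Names}}\t{{.Status}}'
```
Containers that pass their healthcheck show `(healthy)` in the STATUS column.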
`docker-compose` creates a docker network that can be found by running `docker network list`, e.g. `dockerhadoop_default`.
Run `docker network inspect` on the network (e.g. `dockerhadoop_default`) to find the IP address the Hadoop interfaces are published on (a one-line query for this is sketched after the list below). Access these interfaces with the following URLs:
* Namenode: http://<dockerhadoop_IP_address>:50070/dfshealth.html#tab-overview
* History server: http://<dockerhadoop_IP_address>:8188/applicationhistory
* Datanode: http://<dockerhadoop_IP_address>:50075/
* Nodemanager: http://<dockerhadoop_IP_address>:8042/node
* Resource manager: http://<dockerhadoop_IP_address>:8088/
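Rather than reading the raw `docker network inspect` JSON, the addresses can be pulled out with a Go template. A minimal sketch, assuming the default compose network name `dockerhadoop_default` (yours may differ):
```
# Print each container on the compose network with its IPv4 address.
docker network inspect -f \
  '{{range .Containers}}{{.Name}} {{.IPv4Address}}{{println}}{{end}}' \
  dockerhadoop_default
```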
## Configure Environment Variables
The configuration parameters can be specified in the hadoop.env file or as environment variables for specific services (e.g. namenode, datanode, etc.):
```
CORE_CONF_fs_defaultFS=hdfs://namenode:8020
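# Illustrative entries of the same shape (see base/entrypoint.sh for the
# exact mapping): the prefix picks the target *-site.xml file, and the rest
# is the property key with `_` read as `.` and `___` as `-`.
HDFS_CONF_dfs_webhdfs_enabled=true
YARN_CONF_yarn_log___aggregation___enable=true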
@@ -39,10 +51,3 @@ The available configurations are:
* /etc/hadoop/kms-site.xml KMS_CONF
If you need to extend some other configuration file, refer to the base/entrypoint.sh bash script.
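A minimal sketch of the idea behind that script, assuming GNU sed; the function names here are illustrative, and the real base/entrypoint.sh is the authority on the exact escaping rules:
```
#!/usr/bin/env bash
# Sketch: turn FOO_CONF_some_property=value env vars into Hadoop XML properties.
set -euo pipefail

add_property() {
  local path=$1 name=$2 value=$3
  # Splice a <property> element in just before the closing </configuration>.
  sed -i "s|</configuration>|  <property><name>${name}</name><value>${value}</value></property>\n</configuration>|" "$path"
}

configure() {
  local path=$1 prefix=$2
  local var name
  # Every exported variable starting with the prefix becomes one property.
  for var in $(compgen -e "${prefix}_"); do
    # CORE_CONF_fs_defaultFS -> fs.defaultFS (___ -> -, __ -> _, _ -> .)
    name=${var#"${prefix}_"}
    name=$(echo "$name" | sed -e 's/___/-/g' -e 's/__/_/g' -e 's/_/./g')
    add_property "$path" "$name" "${!var}"
  done
}

configure /etc/hadoop/core-site.xml CORE_CONF
```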