# Hadoop Docker

To deploy an example HDFS cluster, run:
```
docker network create hadoop
docker-compose up
```

The configuration parameters can be specified in the hadoop.env file or as environment variables for specific services (e.g. namenode, datanode, etc.):
```
CORE_CONF_fs_defaultFS=hdfs://namenode:8020
```

The CORE_CONF prefix corresponds to core-site.xml, so fs_defaultFS=hdfs://namenode:8020 will be transformed into:
```
<property><name>fs.defaultFS</name><value>hdfs://namenode:8020</value></property>
```

To define a dash inside a configuration parameter, use a triple underscore, such as YARN_CONF_yarn_log___aggregation___enable=true (yarn-site.xml):
```
<property><name>yarn.log-aggregation-enable</name><value>true</value></property>
```

The available configuration files and their prefixes are:

* /etc/hadoop/core-site.xml CORE_CONF
* /etc/hadoop/hdfs-site.xml HDFS_CONF
* /etc/hadoop/yarn-site.xml YARN_CONF
* /etc/hadoop/httpfs-site.xml HTTPFS_CONF
* /etc/hadoop/kms-site.xml KMS_CONF

If you need to extend some other configuration file, refer to the base/entrypoint.sh bash script; a hypothetical per-service docker-compose excerpt and an illustrative sketch of the env-to-XML mapping are included at the end of this README.

After starting the example Hadoop cluster, you should be able to access the web interfaces of all the components (substitute the domain names with the IP addresses reported by ```docker network inspect hadoop```):

* Namenode: http://namenode:50070/dfshealth.html#tab-overview
* History server: http://historyserver:8188/applicationhistory
* Datanode: http://datanode:50075/
* Nodemanager: http://nodemanager:8042/node
* Resource manager: http://resourcemanager:8088/
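
Besides hadoop.env, the same variables can be set per service in docker-compose.yml. The excerpt below is only a hypothetical sketch: the service name, image name, and the dfs.datanode.data.dir value are illustrative placeholders, not the contents of this repository's compose file:
```
# Hypothetical excerpt, not this repository's docker-compose.yml:
# a shared env file plus one datanode-specific HDFS property.
services:
  datanode:
    image: example/hadoop-datanode          # placeholder image name
    env_file:
      - ./hadoop.env                        # shared configuration defaults
    environment:
      # becomes dfs.datanode.data.dir in /etc/hadoop/hdfs-site.xml
      - HDFS_CONF_dfs_datanode_data_dir=file:///hadoop/dfs/data
```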
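
The env-to-XML mapping described above is implemented in base/entrypoint.sh. The script below is only a minimal sketch of that idea, not the actual entrypoint: the function names (add_property, configure) are assumptions, and it expects each *-site.xml file to already contain a closing </configuration> tag:
```
#!/bin/bash
# Minimal sketch of the env-var-to-XML mapping; not the real base/entrypoint.sh.
# Function names (add_property, configure) are hypothetical.

# Append one <property> entry just before the closing </configuration> tag.
add_property() {
  local path=$1 name=$2 value=$3
  local entry="<property><name>${name}</name><value>${value}</value></property>"
  sed -i "s|</configuration>|${entry}</configuration>|" "$path"
}

# Translate every PREFIX_* variable into a property in the given file:
# triple underscore -> dash, remaining underscores -> dots.
configure() {
  local path=$1 prefix=$2              # e.g. /etc/hadoop/core-site.xml CORE_CONF
  local var value name
  while IFS='=' read -r var value; do
    name=${var#"${prefix}_"}           # strip the CORE_CONF_ prefix
    name=${name//___/-}                # log___aggregation___enable -> dashes
    name=${name//_/.}                  # fs_defaultFS -> fs.defaultFS
    add_property "$path" "$name" "$value"
  done < <(env | grep "^${prefix}_")
}

configure /etc/hadoop/core-site.xml CORE_CONF
configure /etc/hadoop/hdfs-site.xml HDFS_CONF
configure /etc/hadoop/yarn-site.xml YARN_CONF
```
With CORE_CONF_fs_defaultFS=hdfs://namenode:8020 exported, this sketch would append <property><name>fs.defaultFS</name><value>hdfs://namenode:8020</value></property> to core-site.xml, matching the example above.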