This document provides an introduction to Hadoop, outlining its core components, its ecosystem, and the contrast between centralized and distributed computing models. It discusses the principles of big data processing, including the need for scalability, fault tolerance, and resource management, and highlights Hadoop's role in managing large volumes of structured and unstructured data. Key concepts such as HDFS, MapReduce, and YARN are explored, along with the integration of additional ecosystem tools such as Spark and Hive.
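To make the MapReduce concept mentioned above concrete, below is a minimal sketch of the classic word-count job written against the org.apache.hadoop.mapreduce API. It is an illustration rather than anything prescribed by this document; the class name and the command-line input/output paths are assumptions for the example.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: emit (word, 1) for every token in the input split.
  public static class TokenizerMapper
       extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reduce phase: sum the counts emitted for each word.
  public static class IntSumReducer
       extends Reducer<Text, IntWritable, Text, IntWritable> {
    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);   // local aggregation before the shuffle
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    // args[0]/args[1] are assumed HDFS paths; the output directory must not already exist.
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

The mapper and reducer run in parallel across the cluster (scheduled by YARN), while HDFS supplies the input splits and stores the output, which is the division of labor among the components this document goes on to describe.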