3 Examples That Show The Unlimited Flexibility of PySpark




A combination of Python and SQL but easier than both


Spark is an analytics engine used for large-scale data processing. It lets you spread both data and computations over clusters to achieve a substantial performance increase.

It is easier than ever to collect, transfer, and store data, so we deal with tremendous amounts of data when working on real-life problems. As a result, distributed engines like Spark are becoming a necessity in such cases.

PySpark is a Python API for Spark. It brings us the simplicity of Python syntax so we can easily process and analyze large amounts of data. The SQL module of PySpark takes it one step further and provides us with SQL-like operations.

What I’m trying to get at here is that PySpark is an extremely efficient tool with easy-to-use and intuitive syntax. A significant factor that makes PySpark such a simple tool is the flexibility it offers.

Whether you are comfortable working with Pandas or SQL, you will not have a hard time learning PySpark. In this article, we will go over 3 common data manipulation operations that demonstrate its flexibility.

We first need to create a SparkSession which serves as an entry point to Spark SQL.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

Let’s create a Spark data frame by reading a CSV file. We will be using the Melbourne housing dataset available on Kaggle.

# header=True tells Spark to use the first line of the CSV as column names
file_path = "/home/sparkuser/Downloads/melb_housing.csv"
df = spark.read.csv(file_path, header=True)
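To give a flavor of the kind of operations we will cover, here is a minimal sketch that computes the average house price per property type in two equivalent ways: once with the DataFrame API, which will feel familiar to Pandas users, and once with plain SQL on a temporary view. The column names Type and Price are assumptions about the Melbourne housing dataset; adjust them to match your file.

from pyspark.sql import functions as F

# DataFrame API: cast the price to a number, group by property type, average it
# (assumes the dataset has "Type" and "Price" columns)
avg_price_df = (
    df.withColumn("Price", F.col("Price").cast("double"))
      .groupBy("Type")
      .agg(F.round(F.avg("Price"), 2).alias("avg_price"))
)
avg_price_df.show()

# SQL: register the data frame as a temporary view and run the same query
df.createOrReplaceTempView("melb")
spark.sql("""
    SELECT Type, ROUND(AVG(CAST(Price AS DOUBLE)), 2) AS avg_price
    FROM melb
    GROUP BY Type
""").show()

Both versions produce the same result, which is exactly the flexibility the rest of the article builds on: you can stay in the DataFrame API, drop into SQL, or mix the two in the same pipeline.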
