Filter Spark DataFrame using Values from a List

In Spark and PySpark, filtering a DataFrame using values from a list is a transformation that selects a subset of rows based on a condition. It returns a new DataFrame that contains only the rows satisfying that condition.

In this article, we shall discuss how to filter a DataFrame using values from a list with the isin() function, in both Spark (Scala) and PySpark, with some examples.

1. Using filter() to Select DataFrame Rows from List of Values

The filter() function is a transformation operation and does not modify the original DataFrame. It takes an expression that evaluates to a Boolean value as input and returns a new DataFrame that contains only the rows for which the expression evaluates to true.

You can also use other comparison operators such as <, <=, >, >=, === (equal), and =!= (not equal) in the filter expression, and combine multiple conditions with the logical operators && (and), || (or), and ! (not). Additionally, functions such as isNotNull, isNull, and startsWith let you filter rows based on specific conditions, as shown in the sketch below.
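For illustration, here is a minimal Scala sketch of a few of these operators combined in a single filter expression; it assumes a DataFrame data with columns id and fruit, like the one built in section 2.1 below:

// Imports
import spark.implicits._

// Rows where id is greater than 1 AND fruit starts with "m"
data.filter($"id" > 1 && $"fruit".startsWith("m")).show()

// Rows where fruit is not null AND is not "apple"
data.filter($"fruit".isNotNull && $"fruit" =!= "apple").show()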

To filter rows based on a list of values, we use the isin() function.

2. Spark Filter DataFrame Using a List

Here are some examples of how to filter rows in a Spark DataFrame based on matching values from a list, using Spark Scala:

2.1 Filtering rows based on matching values from a list

You can filter rows in a DataFrame based on matching values from a list using the isin() function of the DataFrame API. Here is a Scala example code snippet:

//Imports
import spark.implicits._

// Sample data
val data = Seq(
  (1, "apple"),
  (2, "banana"),
  (3, "orange"),
  (4, "mango")
).toDF("id", "fruit")

// List of values to filter
val filterList = List("banana", "mango")

// Filter rows based on values in the list
val filteredData = data.filter($"fruit".isin(filterList:_*))

// Display the filtered data
filteredData.show()

// Output:
//+---+------+
//| id| fruit|
//+---+------+
//|  2|banana|
//|  4| mango|
//+---+------+

In this code, we create a DataFrame data with two columns id and fruit. We then define a list of values filterList that we want to use for filtering. We use the isin function on the fruit column of the DataFrame to filter rows that match any of the values in the list. The resulting filtered DataFrame is stored in filteredData and then displayed using the show method.

Note that we use the :_* syntax when passing filterList to the isin() function. This syntax unpacks the list into individual arguments, since isin() expects a varargs parameter rather than a collection.
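If you prefer to avoid the varargs unpacking, Spark 2.4 and later also provide isInCollection(), which accepts a Scala collection directly. A minimal equivalent sketch:

// Equivalent filter using isInCollection() (Spark 2.4+),
// which takes the collection as-is instead of varargs
val filteredData2 = data.filter($"fruit".isInCollection(filterList))
filteredData2.show()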

2.2 Filter rows based on multiple columns

// Create DataFrame
val data = Seq(
  (1, "apple", "red"),
  (2, "banana", "yellow"),
  (3, "orange", "orange"),
  (4, "mango", "orange")
).toDF("id", "fruit", "color")

// List of values to filter
val filterList = List(("apple", "red"), ("orange", "orange"))

// Filter using contains() on each Row
val filteredData = data.filter(row =>
  filterList.contains((row.getAs[String]("fruit"), row.getAs[String]("color"))))

filteredData.show()

// Output:
//+---+------+------+
//| id| fruit| color|
//+---+------+------+
//|  1| apple|   red|
//|  3|orange|orange|
//+---+------+------+

In this example, we have a DataFrame data with three columns id, fruit, and color. We define a list of tuples filterList containing the values we want to use for filtering based on multiple columns. We use the filter function with a lambda function that checks whether the tuple (fruit, color) for each row is in the filterList. The resulting filtered DataFrame is stored in filteredData and displayed using the show method.

2.3 Filter rows based on a column of ArrayType

// Import array_contains()
import org.apache.spark.sql.functions.array_contains

// Create DataFrame
val data = Seq(
  (1, Array("apple", "red")),
  (2, Array("banana", "yellow")),
  (3, Array("orange", "orange")),
  (4, Array("mango", "orange"))
).toDF("id", "fruits")

// List of values
val filterList = List("apple", "orange")

// array_contains() tests for a single value, so build one
// condition per value and combine them with OR
val filterCondition = filterList
  .map(value => array_contains($"fruits", value))
  .reduce(_ || _)

// Filter rows from list of values
val filteredData = data.filter(filterCondition)

filteredData.show()

// Output:
//+---+----------------+
//| id|          fruits|
//+---+----------------+
//|  1|    [apple, red]|
//|  3|[orange, orange]|
//|  4| [mango, orange]|
//+---+----------------+

In this example, we have a DataFrame data with two columns id and fruits, where the fruits column is of ArrayType. We define a list of values filterList that we want to use for filtering. Because array_contains tests whether an array column contains a single value, we build one condition per value in the list and combine them with || (OR), keeping rows whose fruits array contains any of the values. Note that the mango row also matches, since its array contains "orange". The resulting filtered DataFrame is stored in filteredData and displayed using the show method.

2.4 Filter rows based on a nested column

// Create DataFrame
val data = Seq(
  (1, ("apple", "red")),
  (2, ("banana", "yellow")),
  (3, ("orange", "orange")),
  (4, ("mango", "orange"))
).toDF("id", "fruit_color")

// List of values
val filterList = List("apple", "orange")

// Filter on the first field (_1) of the nested struct column
val filteredData = data.filter($"fruit_color._1".isin(filterList:_*))

filteredData.show()

// Output:
//+---+----------------+
//| id|     fruit_color|
//+---+----------------+
//|  1|    {apple, red}|
//|  3|{orange, orange}|
//+---+----------------+

In this example, we have a DataFrame data with two columns id and fruit_color, where the fruit_color column is a nested struct column. Tuples in the input Seq become structs with fields _1 and _2, so we can reference the first element with the nested column path fruit_color._1 and apply isin() to it with the filterList. Only the rows whose first element is in the list are kept. The resulting filtered DataFrame is stored in filteredData and displayed using the show method.

3. PySpark Filter DataFrame Using a List

Here are the same examples of how to filter rows in a DataFrame based on matching values from a list, this time using PySpark:

3.1 Filtering rows based on matching values from a list

In this example, we create a PySpark DataFrame df with two columns id and fruit. We then define a list of values filter_list that we want to use for filtering. We use the isin function on the fruit column of the DataFrame to filter rows that match any of the values in the list. The resulting filtered DataFrame is stored in filtered_df and then displayed using the show method.

#Imports
from pyspark.sql.functions import col

# Sample data
data = [(1, "apple"), (2, "banana"), (3, "orange"), (4, "mango")]
df = spark.createDataFrame(data, ["id", "fruit"])

# List of values to filter
filter_list = ["banana", "mango"]

# Filter rows based on values in the list
filtered_df = df.filter(col("fruit").isin(filter_list))

# Display the filtered data
filtered_df.show()

#Output
#+---+------+
#| id| fruit|
#+---+------+
#|  2|banana|
#|  4| mango|
#+---+------+

Note that we use the col function to reference the fruit column of the DataFrame and call isin on it. Unlike the Scala API, the PySpark isin function accepts a list of values directly, which is why we pass filter_list as-is.

3.2 Filter rows based on multiple columns

In this example, we have a DataFrame df with three columns id, fruit, and color. We define a list of tuples filter_list containing the value pairs we want to filter on. A plain Python tuple of Column objects has no isin method, so we instead build one equality condition per (fruit, color) pair and combine them with | (OR). The resulting filtered DataFrame is stored in filtered_df and displayed using the show method.

# Imports
from functools import reduce
from pyspark.sql.functions import col

# Create DataFrame
data = [(1, "apple", "red"), (2, "banana", "yellow"),
        (3, "orange", "orange"), (4, "mango", "orange")]
df = spark.createDataFrame(data, ["id", "fruit", "color"])

# List of values
filter_list = [("apple", "red"), ("orange", "orange")]

# Build one condition per (fruit, color) pair and OR them together
condition = reduce(lambda a, b: a | b,
                   [(col("fruit") == fruit) & (col("color") == color)
                    for fruit, color in filter_list])

# Filter
filtered_df = df.filter(condition)

filtered_df.show()

#Output:
#+---+------+------+
#| id| fruit| color|
#+---+------+------+
#|  1| apple|   red|
#|  3|orange|orange|
#+---+------+------+

3.3 Filter rows based on a column of ArrayType

In this example, we have a DataFrame df with two columns id and fruits, where the fruits column is of ArrayType. We define a list of values filter_list that we want to use for filtering. Because array_contains tests whether an array column contains a single value, we build one condition per value in the list and combine them with | (OR), keeping rows whose fruits array contains any of the values in filter_list. The resulting filtered DataFrame is stored in filtered_df and displayed using the show method.

# Imports
from functools import reduce
from pyspark.sql.functions import array_contains, col

# Create DataFrame
data = [(1, ["apple", "red"]), (2, ["banana", "yellow"]),
        (3, ["orange", "orange"]), (4, ["mango", "orange"])]
df = spark.createDataFrame(data, ["id", "fruits"])

# Create list
filter_list = ["apple", "orange"]

# array_contains() tests for a single value, so OR one
# condition per value in the list
condition = reduce(lambda a, b: a | b,
                   [array_contains(col("fruits"), value) for value in filter_list])

# Filter DataFrame
filtered_df = df.filter(condition)

# Show DataFrame
filtered_df.show()

# Output:
# +---+----------------+
# | id|          fruits|
# +---+----------------+
# |  1|    [apple, red]|
# |  3|[orange, orange]|
# |  4| [mango, orange]|
# +---+----------------+

3.4 Filter rows based on a nested column

In this example, we have a DataFrame df with two columns id and fruit_color, where fruit_color is a nested struct column. Python tuples are inferred as structs with fields _1 and _2, so we select the first field with getField("_1") and then apply the isin function with the filter_list.

The resulting filtered DataFrame is stored in filtered_df and displayed using the show() method.

# Imports
from pyspark.sql.functions import col

# Create nested DataFrame
data = [(1, ("apple", "red")), (2, ("banana", "yellow")),
        (3, ("orange", "orange")), (4, ("mango", "orange"))]
df = spark.createDataFrame(data, ["id", "fruit_color"])

# list to be filtered
filter_list = ["apple", "orange"]

# Filter on the first field (_1) of the nested struct column
filtered_df = df.filter(col("fruit_color").getField("_1").isin(filter_list))

# show
filtered_df.show()

# Output:
#+---+----------------+
#| id|     fruit_color|
#+---+----------------+
#|  1|    {apple, red}|
#|  3|{orange, orange}|
#+---+----------------+

4. Conclusion

Filtering a DataFrame using values from a list is a common operation in Spark data processing: you often need to select a subset of rows whose values match a specific set stored in a list. This is easily achieved with the isin function, which returns a Boolean column indicating whether each row's value is present in the list; filter then keeps the rows where that column is true.
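To make that explicit, here is a minimal sketch (reusing the data and filterList from section 2.1) that selects the Boolean column isin() produces instead of filtering on it; the in_list alias is ours, for illustration:

// isin() by itself yields a Boolean column; filter() simply keeps
// the rows where that column is true
data.select($"fruit", $"fruit".isin(filterList:_*).as("in_list")).show()

// Output:
// +------+-------+
// | fruit|in_list|
// +------+-------+
// | apple|  false|
// |banana|   true|
// |orange|  false|
// | mango|   true|
// +------+-------+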

Related Articles

Spark RDD filter() with examples

Filter Spark DataFrame Based on Date

PySpark Where Filter Function | Multiple Conditions

Spark Filter startsWith(), endsWith() Examples

Subtracting two DataFrames in Spark?

Spark Filter Using contains() Examples

Spark Filter Rows with NULL Values in DataFrame