Optimizing
Apache Spark SQL Joins
Vida Ha
Solutions Architect
About Me
2005 Mobile Web & Voice Search
2012 Reporting & Analytics
2014 Solutions Architect
Evolution of Spark…
5
2014:
• Spark 1.x
• RDD-based APIs
• Everyday I’m Shufflin’
2017:
• Spark 2.x
• DataFrames & Datasets
• Advanced SQL Catalyst
• Optimizing Joins
Spark SQL Joins
6
SELECT …
FROM TABLE A
JOIN TABLE B
ON A.KEY1 = B.KEY2
Topics Covered Today
7
Basic Joins:
• Shuffle Hash Join
• Troubleshooting
• Broadcast Hash Join
• Cartesian Join
Special Cases:
• Theta Join
• One to Many Join
Shuffle Hash Join
8
A Shuffle Hash Join is the most basic type of join, and goes back to MapReduce fundamentals.
• Map over the two different DataFrames/tables.
• Use the fields in the join condition as the output key.
• Shuffle both datasets by the output key.
• In the reduce phase, join the two datasets: now any rows of both tables with the same key are on the same machine and are sorted.
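These steps can be written out by hand with the RDD API. A minimal sketch in PySpark, assuming a SparkContext sc and made-up rows (not the speaker's code):

# Hypothetical pair RDDs already keyed by the join field (state code).
people = sc.parallelize([("CA", "Alice"), ("RI", "Bob"), ("CA", "Carol")])
states = sc.parallelize([("CA", "California"), ("RI", "Rhode Island")])

# join() shuffles both RDDs by key; rows sharing a key land on the
# same machine, where they are combined in the reduce phase.
joined = people.join(states)   # ("CA", ("Alice", "California")), ...
print(joined.collect())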
Shuffle Hash Join
9
[Diagram: Table 1 and Table 2 are mapped by join key, shuffled, and reduced into the joined output partitions.]
join_rdd = sqlContext.sql("""select *
    FROM people_in_the_us
    JOIN states
    ON people_in_the_us.state = states.name""")
Shuffle Hash Join Performance
Works best when the DFs:
• Distribute evenly by the key you are joining on.
• Have an adequate number of keys for parallelism.
Uneven Sharding & Limited Parallelism
Problems:
● Uneven sharding: **all** the data for CA lands in one partition, and **all** the data for RI in another.
● Limited parallelism with only 50 output partitions: all the data for the US is shuffled into just 50 keys, one per state.
[Diagram: US DF partitions 1..N joined with a small State DF; the shuffle funnels everything into per-state partitions.]
A larger Spark Cluster will not solve these problems!
Broadcast Hash Join can address this problem if
one DF is small enough to fit in memory.
join_rdd = sqlContext.sql("""select *
    FROM people_in_california
    LEFT JOIN all_the_people_in_the_world
    ON people_in_california.id = all_the_people_in_the_world.id""")
More Performance Considerations
Final output keys = # of people in CA, so we don’t need a huge Spark cluster, right?
The size of the Spark cluster needed to run this job is driven by the large table, not by the medium-sized table.
Left Join - Shuffle Step
Not a Problem:
● Even sharding
● Good parallelism
Shuffles everything before dropping keys: all the data from both tables goes over the network before the final joined output is produced.
[Diagram: All CA DF and All World DF are fully shuffled, then joined into the final output.]
A Better Solution
Filter the World DF for only the entries that match the CA IDs, then join (a code sketch follows below).
Benefits:
● Less data shuffled over the network and less shuffle space needed.
● More transforms, but still faster.
[Diagram: a filter transform produces a Partial World DF, which is then shuffled and joined with the All CA DF into the final output.]
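One way to express that filter in PySpark. This is a sketch, not the speaker's code; ca_df, world_df, and the id column are assumed names, and a left-semi join is used as the filter step:

# Keep only World rows whose id also appears in the CA DF.
partial_world_df = world_df.join(ca_df.select("id"), on="id", how="leftsemi")

# The expensive join now shuffles only the Partial World DF.
joined = ca_df.join(partial_world_df, on="id", how="left")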
What’s the Tipping Point for Huge?
● Can’t tell you.
● There aren’t always strict rules for optimizing.
● If you were only considering two small columns from the World RDD in Parquet format, the filtering step may not be worth it.
You should understand your data and its unique properties in order to best optimize your Spark job.
In Practice: Detecting Shuffle Problems
Things to Look for:
● Tasks that take much longer to run than others.
● Speculative tasks that are launching.
● Shards that have a lot more input or shuffle output.
Check the Spark UI pages for task-level detail about your Spark job.
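A quick programmatic complement to the Spark UI is to look at the key distribution before joining. A sketch, with df and the state column as placeholder names:

from pyspark.sql import functions as F

# A handful of keys holding most of the rows means uneven sharding ahead.
df.groupBy("state").count().orderBy(F.desc("count")).show(10)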
Broadcast Hash Join
18
Optimization: when one of the DFs is small enough to fit in memory on a single machine, broadcast a copy of it to every partition of the large DF.
Parallelism of the large DF is maintained (n output partitions), and a shuffle is not even needed.
[Diagram: the Small DF is broadcast to Large DF partitions 1 through N.]
Broadcast Hash Join
19
• Often optimal over Shuffle Hash Join.
• Use “explain” to determine whether the Spark SQL Catalyst optimizer has chosen Broadcast Hash Join.
• Should be automatic for many Spark SQL tables, but you may need to provide hints for other types (see the sketch below).
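A sketch of both checks in PySpark (the DF and key names are placeholders, not from the talk):

from pyspark.sql.functions import broadcast

# explain() prints the physical plan; look for BroadcastHashJoin
# instead of SortMergeJoin.
large_df.join(small_df, on="key").explain()

# If the optimizer doesn't choose it automatically, hint explicitly
# that the small DF should be broadcast.
joined = large_df.join(broadcast(small_df), on="key")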
Cartesian Join
20
• A cartesian join can easily explode the number of output rows.
100,000 X 100,000 = 10 Billion
• Alternative to a full-blown cartesian join (sketched below):
• Create an RDD of UID by UID pairs.
• Force a broadcast of the rows of the table.
• Call a UDF, given the UID by UID pair, to look up the table rows and perform your calculation.
• Time your calculation on a sample set to size your cluster.
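A sketch of that alternative in PySpark. The DF names, the uid columns, and the scoring function are all made up for illustration:

from pyspark.sql import functions as F
from pyspark.sql.types import DoubleType

# Pair up just the UIDs, not the full rows.
uid_pairs = uids_a.crossJoin(uids_b)          # columns: uid_a, uid_b

# Force a broadcast of the table's rows, keyed by UID.
rows_by_uid = sc.broadcast({row.uid: row for row in lookup_df.collect()})

def pair_score(uid_a, uid_b):
    a = rows_by_uid.value[uid_a]              # look up the full rows
    b = rows_by_uid.value[uid_b]
    return float(a.value * b.value)           # placeholder calculation

score_udf = F.udf(pair_score, DoubleType())
result = uid_pairs.withColumn("score", score_udf("uid_a", "uid_b"))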
One To Many Join
21
• A single row in one table can map to many rows in the 2nd table.
• Can explode the number of output rows.
• Not a problem if you use Parquet - the output files don’t get much bigger, since the duplicated data encodes well.
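A small illustration (the DF names and path are placeholders): the columns repeated from the one-side table compress well under Parquet's dictionary and run-length encoding.

# one_df has one row per key; many_df has many rows per key, so the
# join output has roughly as many rows as many_df.
joined = one_df.join(many_df, on="key")

# The duplicated one_df columns encode compactly on disk.
joined.write.mode("overwrite").parquet("/tmp/one_to_many_join")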
Theta Join
22
join_rdd = sqlContext.sql("""select *
    FROM tableA
    JOIN tableB
    ON (keyA < keyB + 10)""")
• Spark SQL considers each keyA against each keyB in the example above, looping to check whether the theta condition is met.
• Better solution: create buckets that keyA and keyB can be matched on (a sketch follows).
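A sketch of the bucketing idea in PySpark. Note the assumptions: tableA, tableB, and the bucket width are placeholders, and the sketch uses a bounded, band-style condition such as abs(keyA - keyB) < 10, since the one-sided predicate above matches an unbounded range where bucketing cannot prune much:

from pyspark.sql import functions as F

bucket_width = 10

# Give every row a coarse bucket so the join has an equality key.
a = tableA.withColumn("bucket", F.floor(F.col("keyA") / bucket_width))
b = tableB.withColumn("bucket", F.floor(F.col("keyB") / bucket_width))

# Each keyA only needs to be compared against neighbouring buckets of keyB,
# so expand b into adjacent buckets, join on bucket equality, and then
# re-apply the real theta condition.
b_nearby = b.withColumn(
    "bucket",
    F.explode(F.array(F.col("bucket") - 1, F.col("bucket"), F.col("bucket") + 1)))

joined = (a.join(b_nearby, on="bucket")
           .where(F.abs(F.col("keyA") - F.col("keyB")) < bucket_width))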
Thank you
Questions?
