Securing Spark Applications
Kostas Sakellis
Marcelo Vanzin
What is Security?
• Security has many facets
• This talk will focus on three areas:
– Encryption
– Authentication
– Authorization
Why do I need security?
• Multi-tenancy
• Application isolation
• User identification
• Access control enforcement
• Compliance with government regulations
Before we go further...
• Set up Kerberos
• Use HDFS (or another secure filesystem)
• Use YARN!
• Configure them for security (enable auth, encryption).
Kerberos, HDFS, and YARN provide the security backbone
for Spark.
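A rough sketch of what that means on the Hadoop side (standard property names, illustrative values), in core-site.xml:

  <property>
    <name>hadoop.security.authentication</name>
    <value>kerberos</value>
  </property>
  <property>
    <name>hadoop.security.authorization</name>
    <value>true</value>
  </property>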
Encryption
• In a secure cluster, data should not be visible in the clear
• Very important to financial / government institutions
What a Spark app looks like
[Architecture diagram: SparkSubmit talks to the YARN ResourceManager (RM); the AM / Driver and Executors run on NodeManagers (NM). Channels shown: Control RPC, File Download, Shuffle / Cached Blocks, Shuffle Blocks exchanged with the per-node Shuffle Service, Shuffle Blocks / Metadata, and the UI.]
Data Flow in Spark
Every connection in the previous slide can transmit sensitive
data!
• Input data transmitted via broadcast variables
• Computed data during shuffles
• Data in serialized tasks, files uploaded with the job
How to prevent other users from seeing this data?
Encryption in Spark
• Almost all channels support encryption.
– Exception 1: UI (SPARK-2750)
– Exception 2: local shuffle / cache files (SPARK-5682)
For local files, set up YARN local dirs to point at local
encrypted disk(s) if desired. (SPARK-5682)
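A hedged example of that last point (mount points are hypothetical), in yarn-site.xml:

  <property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/encrypted/disk1/yarn,/encrypted/disk2/yarn</value>
  </property>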
Encryption: Current State
Different channel, different method.
• Shuffle protocol uses SASL
• RPC / File download use SSL
SSL can be hard to set up.
• Need certificates readable on every node
• Sharing one certificate across nodes is less secure
• Hard to issue per-user certificates
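For reference, a hedged sample of the SSL knobs involved (Spark 1.3+; paths and passwords are placeholders, and every node needs the keystore locally):

  spark.ssl.enabled            true
  spark.ssl.keyStore           /etc/spark/conf/keystore.jks
  spark.ssl.keyStorePassword   secret
  spark.ssl.trustStore         /etc/spark/conf/truststore.jks
  spark.ssl.trustStorePassword secret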
Encryption: The Goal
SASL everywhere for wire encryption (except UI).
• Minimum configuration (one boolean config)
• Uses built-in JVM libraries
• SPARK-6017
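A minimal sketch of the goal configuration (assuming Spark 1.4+; exact property names may vary by version), in spark-defaults.conf:

  # Enable Spark's internal authentication (shared secret, distributed by YARN)
  spark.authenticate                      true
  # Ask for SASL encryption on the channels that support it (SPARK-6017)
  spark.authenticate.enableSaslEncryption true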
For UI:
• Support for SSL
• Or audit the UI to remove sensitive info (e.g. information
on the environment page).
Authentication
Who is reading my data?
• Spark uses Kerberos
– the necessary evil
• Ubiquitous among other services
– YARN, HDFS, Hive, HBase etc.
Who’s reading my data?
Kerberos provides secure authentication.
[Sequence diagram: User / Application ↔ KDC]
1. User → KDC: "Hi, I'm Bob."
2. KDC → User: "Hello Bob. Here's your TGT."
3. Application → KDC: "Here's my TGT. I want to talk to HDFS."
4. KDC → Application: "Here's your HDFS ticket."
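On the user side this exchange is just (realm hypothetical):

  kinit bob@EXAMPLE.COM   # "Hi, I'm Bob." -> TGT lands in the ticket cache
  klist                   # inspect the cached TGT and service tickets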
Now with a distributed app...
[Diagram: eight Executors each independently contact the KDC with "Hi, I'm Bob." Something is wrong.]
Kerberos in Hadoop / Spark
KDCs do not allow multiple concurrent logins at the scale
distributed applications need. Hadoop services use
delegation tokens instead.
[Diagram: Driver ↔ NameNode, Executor ↔ DataNode: the driver obtains delegation tokens and the executors use them to access data.]
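As a concrete (hedged) illustration, HDFS can mint a delegation token from the command line; the renewer and output path here are just examples:

  kinit bob@EXAMPLE.COM
  hdfs fetchdt --renewer yarn /tmp/bob.token   # writes a delegation token to a file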
Delegation Tokens
Like Kerberos tickets, they have a TTL.
• OK for most batch applications.
• Not OK for long-running applications
– Streaming
– Spark SQL Thrift Server
Delegation Tokens
Since 1.4, Spark can manage delegation tokens!
• Restricted to HDFS currently
• Requires user’s keytab to be deployed with application
• Still some remaining issues in client deploy mode
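A hedged command-line sketch (class and paths are hypothetical; --principal and --keytab appeared in Spark 1.4):

  spark-submit --master yarn-cluster \
    --principal bob@EXAMPLE.COM \
    --keytab /home/bob/bob.keytab \
    --class com.example.StreamingJob \
    my-streaming-app.jar

Spark uses the keytab to re-login periodically and obtain fresh delegation tokens before the old ones expire.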
Authorization
How can I share my data?
Simplest form of authorization: file permissions.
• Use Unix-style permissions or ACLs to let others read
from and / or write to files and directories
• Simple, but high maintenance. Set permissions /
ownership for new files, mess with umask, etc.
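For example (paths, users, and groups hypothetical):

  # Classic Unix-style bits
  hdfs dfs -chmod 750 /data/sales
  hdfs dfs -chown alice:analysts /data/sales
  # Finer-grained HDFS ACLs
  hdfs dfs -setfacl -m user:bob:r-x /data/sales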
More than just FS semantics...
Authorization becomes more complicated as abstractions
are created.
• Tables, columns, partitions instead of files and
directories
• Semantic gap
• Need a trusted entity to enforce access control
Trusted Service: Hive
Hive has a trusted service (“HiveServer2”) for enforcing
authorization.
• HS2 parses queries and makes sure users have access
to the data they’re requesting / modifying.
HS2 runs as a trusted user with access to the whole
warehouse. Users don’t run code directly in HS2*, so there’s
no danger of code escaping access checks.
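The trust boundary is visible in how users connect: over JDBC to HS2, never to the warehouse files directly. A hedged example (host and principal are placeholders):

  beeline -u "jdbc:hive2://hs2-host:10000/default;principal=hive/_HOST@EXAMPLE.COM"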
Untrusted Apps: Spark
Each Spark app runs as the requesting user, and needs
access to the underlying files.
• Spark itself cannot enforce access control, since it’s
running as the user and is thus untrusted.
• Restricted to file system permission semantics.
How to bridge the two worlds?
Apache Sentry
• Role-based access control to resources
• Integrates with Hive / HS2 to control access to data
• Fine-grained (up to column level) controls
Hive data and HDFS data have different semantics. How to
bridge that?
The Sentry HDFS Plugin
Synchronize HDFS file permissions with higher-level
abstractions.
• Permission to read table = permission to read table’s
files
• Permission to create table = permission to write to
database’s directory
Uses HDFS ACLs for fine-grained user permissions.
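For illustration (database, table, and role names hypothetical), a Sentry grant like

  GRANT SELECT ON TABLE sales.orders TO ROLE analysts;

is synchronized by the plugin into a read ACL on the table's directory, roughly:

  hdfs dfs -getfacl /user/hive/warehouse/sales.db/orders
  # ... group:analysts:r-x   (managed by Sentry, not edited by hand)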
Still restricted to FS view of the world!
• Files, directories, etc…
• Cannot provide column-level and row-level access
control.
• Whole table or nothing.
Still, it goes a long way in allowing Spark applications to
work well with Hive data in a shared, secure environment.
But...
Future: RecordService
A distributed, scalable data access service for unified authorization in Hadoop.
RecordService
• Drop-in replacement for InputFormats
• SparkSQL: Integration with Data Sources API
– Predicate pushdown, projection
RecordService
• Assume we had a table tpch.nation:

  column_name   column_type
  n_nationkey   smallint
  n_name        string
  n_regionkey   smallint
  n_comment     string
import com.cloudera.recordservice.spark._

val context = new org.apache.spark.sql.SQLContext(sc)
// Load tpch.nation through the RecordService data source
val df = context.load("tpch.nation",
  "com.cloudera.recordservice.spark")
// The aggregation runs in Spark; all reads go through RecordService
val results = df.groupBy("n_regionkey")
  .count()
  .collect()
RecordService
• Users can enforce Sentry permissions using views
• Allows column and row level security
> CREATE ROLE restrictedrole;
> GRANT ROLE restrictedrole TO GROUP restrictedgroup;
> USE tpch;
> CREATE VIEW nation_names AS
SELECT n_nationkey, n_name
FROM tpch.nation;
> GRANT SELECT ON TABLE tpch.nation_names TO ROLE restrictedrole;
...
val df = context.load("tpch.nation",
  "com.cloudera.recordservice.spark")
val results = df.collect()

>> TRecordServiceException(code:INVALID_REQUEST, message:Could not plan request.,
   detail:AuthorizationException: User 'kostas' does not have privileges
   to execute 'SELECT' on: tpch.nation)
RecordService
...
val df = context.load("tpch.nation_names",
  "com.cloudera.recordservice.spark")
val results = df.collect()

This time the collect succeeds: restrictedrole has SELECT on the
nation_names view, so only the two permitted columns are returned.
RecordService
• Documentation: http://cloudera.github.io/RecordServiceClient/
• Beta Download: http://www.cloudera.com/content/cloudera/en/downloads/betas/recordservice/0-1-0.html
Takeaways
• Spark can be made secure today!
• Benefits from a lot of existing Hadoop platform work
• Still work to be done
– Ease of use
– Better integration with Sentry / RecordService
References
• Encryption: SPARK-6017, SPARK-5682
• Delegation tokens: SPARK-5342
• Sentry: http://sentry.apache.org/
– HDFS synchronization: SENTRY-432
• RecordService:
http://cloudera.github.io/RecordServiceClient/
Thanks!
Questions?
