Scala - Using a keytab file in a Spark standalone program
I am trying to access a file on HDFS from a standalone Scala program using Apache Spark. I get the following error upon execution:
SIMPLE authentication is not enabled. Available:[TOKEN, KERBEROS]
I found this question, which explains that I need to create a keytab file and make the standalone program use it. I have generated the keytab file. Could someone tell me how I can use it in my program?
Any help is appreciated.
PS - I am using Hadoop 2.3.0 and Spark 0.9.0.
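For reference, here is a minimal sketch of the kind of login call I believe is needed, using Hadoop's UserGroupInformation API; the principal name and keytab path below are just placeholders, not values from my cluster:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.security.UserGroupInformation

object KeytabLoginSketch {
  def main(args: Array[String]): Unit = {
    // Placeholder principal and keytab path; the real values would be
    // whatever the keytab was generated for.
    val principal  = "myuser@EXAMPLE.COM"
    val keytabPath = "/path/to/myuser.keytab"

    // Tell Hadoop to use Kerberos authentication before logging in.
    val conf = new Configuration()
    conf.set("hadoop.security.authentication", "kerberos")
    UserGroupInformation.setConfiguration(conf)

    // Log in from the keytab; HDFS calls made afterwards in this JVM
    // should authenticate as this principal.
    UserGroupInformation.loginUserFromKeytab(principal, keytabPath)
  }
}

What I am unsure about is how to wire this into the Spark program correctly.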
Update: this is how my core-site.xml looks:
<?xml version="1.0" encoding="UTF-8"?>
<!--Autogenerated by Cloudera Manager-->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://ushadoop</value>
  </property>
  <property>
    <name>fs.trash.interval</name>
    <value>1</value>
  </property>
  <property>
    <name>io.compression.codecs</name>
    <value>org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec,org.apache.hadoop.io.compress.DeflateCodec,org.apache.hadoop.io.compress.SnappyCodec,org.apache.hadoop.io.compress.Lz4Codec</value>
  </property>
  <property>
    <name>hadoop.security.authentication</name>
    <value>kerberos</value>
  </property>
  <property>
    <name>hadoop.rpc.protection</name>
    <value>authentication</value>
  </property>
  <property>
    <name>hadoop.security.auth_to_local</name>
    <value>DEFAULT</value>
  </property>
</configuration>
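In case it matters, this is roughly how I would expect the standalone program to pick up that core-site.xml; the configuration directory below is a placeholder for wherever the cluster config files are copied locally:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import org.apache.hadoop.security.UserGroupInformation

object ClusterConfSketch {
  def main(args: Array[String]): Unit = {
    // Placeholder path to a local copy of the cluster configuration files.
    val confDir = "/etc/hadoop/conf"

    // Load the Kerberos-enabled site files into a Hadoop Configuration.
    val conf = new Configuration()
    conf.addResource(new Path(confDir + "/core-site.xml"))
    conf.addResource(new Path(confDir + "/hdfs-site.xml"))

    // Hand the configuration to Hadoop's security layer so that
    // UserGroupInformation knows Kerberos is required.
    UserGroupInformation.setConfiguration(conf)
  }
}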