SparkDataFrame-class {SparkR}	R Documentation

S4 class that represents a SparkDataFrame
Description

DataFrames can be created using functions like createDataFrame, read.json, table, etc.
Usage

dataFrame(sdf, isCached = FALSE)
Arguments

sdf	A Java object reference to the backing Scala DataFrame
isCached	TRUE if the SparkDataFrame is cached
Slots

env	An R environment that stores bookkeeping states of the SparkDataFrame
sdf	A Java object reference to the backing Scala DataFrame
See Also

createDataFrame, read.json, table

https://spark.apache.org/docs/latest/sparkr.html#sparkr-dataframes
Other SparkDataFrame functions: [[, agg, arrange, as.data.frame, attach, cache, collect, colnames, coltypes, columns, count, dapply, describe, dim, distinct, dropDuplicates, dropna, drop, dtypes, except, explain, filter, first, group_by, head, histogram, insertInto, intersect, isLocal, join, limit, merge, mutate, ncol, persist, printSchema, registerTempTable, rename, repartition, sample, saveAsTable, selectExpr, select, showDF, show, str, take, unionAll, unpersist, withColumn, write.df, write.jdbc, write.json, write.parquet, write.text
Examples

## Not run: 
sc <- sparkR.init()
sqlContext <- sparkRSQL.init(sc)
df <- createDataFrame(sqlContext, faithful)
## End(Not run)
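The example above uses the pre-2.0 entry points. A sketch of the same workflow with the Spark 2.x+ API, where sparkR.session() replaces sparkR.init() and sparkRSQL.init() and createDataFrame no longer takes a sqlContext argument (requires a local Spark installation to run):

```r
library(SparkR)

# Start (or reuse) a Spark session; appName is optional
sparkR.session(appName = "SparkDataFrameExample")

# Create a SparkDataFrame from the built-in faithful data set
df <- createDataFrame(faithful)

# Inspect the first rows as a local data.frame
head(df)

# Shut the session down when done
sparkR.session.stop()
```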