[[ {SparkR}    R Documentation
Return subsets of a SparkDataFrame according to the given conditions
Usage

## S4 method for signature 'SparkDataFrame,numericOrcharacter'
x[[i]]

## S4 method for signature 'SparkDataFrame'
x[i, j, ..., drop = F]

## S4 method for signature 'SparkDataFrame'
subset(x, subset, select, drop = F, ...)
Arguments

x: a SparkDataFrame.

drop: if TRUE, a Column is returned when the resulting dataset has only one column; otherwise, a SparkDataFrame is always returned.

subset: (optional) a logical expression used to filter rows.

select: an expression for a single Column, or a list of columns to select from the SparkDataFrame.
Value

A new SparkDataFrame containing only the rows that meet the condition, with the selected columns.
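As an illustration of the return value and the drop argument, the sketch below assumes a running SparkR session and builds a small SparkDataFrame named df; the column names and values are made up for the example.

library(SparkR)

df <- createDataFrame(data.frame(name = c("Anna", "Smith"), age = c(19, 30)))

# Two selected columns: the result stays a SparkDataFrame
two_cols <- df[df$age > 20, c("name", "age")]
class(two_cols)                 # expected: "SparkDataFrame"

# One selected column with drop = TRUE: a Column rather than a SparkDataFrame
one_col <- df[, "age", drop = TRUE]
class(one_col)                  # expected: "Column"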
See Also

Other SparkDataFrame functions: SparkDataFrame-class, agg, arrange, as.data.frame, attach, cache, collect, colnames, coltypes, columns, count, dapply, describe, dim, distinct, dropDuplicates, dropna, drop, dtypes, except, explain, filter, first, group_by, head, histogram, insertInto, intersect, isLocal, join, limit, merge, mutate, ncol, persist, printSchema, registerTempTable, rename, repartition, sample, saveAsTable, selectExpr, select, showDF, show, str, take, unionAll, unpersist, withColumn, write.df, write.jdbc, write.json, write.parquet, write.text

Other subsetting functions: filter, select
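For readers coming from the filter and select functions listed above, the sketch below shows how bracket subsetting can be expressed with those verbs; df is the same illustrative SparkDataFrame as before, and the equivalence is stated as an assumption rather than taken from this page.

# Bracket form: filter rows and project a single column in one step
by_bracket <- df[df$age > 20, "name"]

# The same result written with the subsetting functions filter() and select()
by_verbs <- select(filter(df, df$age > 20), "name")

collect(by_bracket)   # same rows and single "name" column as collect(by_verbs)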
Examples

## Not run:
# Columns can be selected using `[[` and `[`
df[[2]] == df[["age"]]
df[, 2] == df[, "age"]
df[, c("name", "age")]
# Or to filter rows
df[df$age > 20, ]
# A SparkDataFrame can be subset on both rows and columns
df[df$name == "Smith", c(1, 2)]
df[df$age %in% c(19, 30), 1:2]
subset(df, df$age %in% c(19, 30), 1:2)
subset(df, df$age %in% c(19), select = c(1, 2))
subset(df, select = c(1, 2))
## End(Not run)
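The examples above assume an existing SparkDataFrame df. A self-contained sketch is given below; the local master setting and the data are assumptions made for illustration only.

library(SparkR)
sparkR.session(master = "local[1]")

# Build a small SparkDataFrame from a local data.frame (illustrative data)
people <- data.frame(name = c("Anna", "Smith", "Ben"), age = c(19, 30, 45))
df <- createDataFrame(people)

head(df[df$age %in% c(19, 30), 1:2])             # rows with matching age, first two columns
head(subset(df, df$age > 20, select = "name"))   # the same idea via subset()

sparkR.session.stop()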