public class AvroSequenceFile extends SequenceFileBase
A wrapper around a Hadoop SequenceFile that also supports reading and writing Avro data.
The vanilla Hadoop SequenceFile contains a header followed by a sequence of records. A record consists of a key and a value. The key and value must either implement the Writable interface or be accepted by a Serialization registered with the SerializationFactory.
Since Avro data are Plain Old Java Objects (e.g., Integer for data with schema "int"), they do not implement Writable. Furthermore, the SequenceFile cannot determine whether an object instance of type CharSequence that also implements Writable should be serialized using Avro or WritableSerialization.
The solution implemented in AvroSequenceFile is to:

- wrap Avro key data in an AvroKey object,
- wrap Avro value data in an AvroValue object,
- register an Avro-aware Serialization with the SerializationFactory, which will accept only objects that are instances of either AvroKey or AvroValue, and
- store the key and value writer schemas in the SequenceFile header metadata.
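As a rough illustration of the wrapper approach, here is a minimal sketch. It assumes the AvroKey and AvroValue wrapper classes from the org.apache.avro.mapred package and their datum() accessor, which are not documented on this page; it needs the avro-mapred artifact on the classpath.

```java
import org.apache.avro.mapred.AvroKey;
import org.apache.avro.mapred.AvroValue;

public class WrapperSketch {
  public static void main(String[] args) {
    // A plain Integer matches the Avro schema "int" but does not
    // implement Writable, so it is wrapped in an AvroKey container
    // that the registered serialization can unambiguously accept.
    AvroKey<Integer> key = new AvroKey<Integer>(42);
    AvroValue<CharSequence> value = new AvroValue<CharSequence>("hello");

    // datum() returns the wrapped object.
    System.out.println(key.datum());
    System.out.println(value.datum());
  }
}
```

The wrappers carry no schema themselves; the writer schemas come from the file header metadata described below, which is what lets the serialization stay unambiguous.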
Nested Class Summary

- A Writer for Avro-enabled SequenceFiles using block-level compression.
- A reader for SequenceFiles that may contain Avro data.
- A Writer for Avro-enabled SequenceFiles using record-level compression.
- A writer for an uncompressed SequenceFile that supports Avro data.
Field Summary

- The SequenceFile.Metadata field for the Avro key writer schema.
- The SequenceFile.Metadata field for the Avro value writer schema.
Method Summary

- createWriter(AvroSequenceFile.Writer.Options options): Creates a writer from a set of options.
public static final Text METADATA_FIELD_KEY_SCHEMA
public static final Text METADATA_FIELD_VALUE_SCHEMA
public static SequenceFile.Writer createWriter(AvroSequenceFile.Writer.Options options) throws IOException
Since there are different implementations of Writer depending on the compression type, this method constructs the appropriate subclass for the compression type given in the options.
Parameters:
options - The options for the writer.
Throws:
IOException - If the writer cannot be created.
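The call can be sketched as follows. The Options builder methods shown (withFileSystem, withConfiguration, withOutputPath, withKeySchema, withValueSchema) and the org.apache.avro.hadoop.io package location are assumptions based on common Avro/Hadoop conventions, not confirmed by this page; the sketch needs a Hadoop and Avro classpath to compile.

```java
import java.io.IOException;

import org.apache.avro.Schema;
import org.apache.avro.hadoop.io.AvroSequenceFile; // assumed package
import org.apache.avro.mapred.AvroKey;
import org.apache.avro.mapred.AvroValue;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;

public class CreateWriterSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    // createWriter picks the Writer subclass that matches the
    // compression type carried in the options.
    SequenceFile.Writer writer = AvroSequenceFile.createWriter(
        new AvroSequenceFile.Writer.Options()
            .withFileSystem(fs)
            .withConfiguration(conf)
            .withOutputPath(new Path("data.seq"))
            // Writer schemas end up in the file header metadata
            // (METADATA_FIELD_KEY_SCHEMA / METADATA_FIELD_VALUE_SCHEMA).
            .withKeySchema(Schema.create(Schema.Type.INT))
            .withValueSchema(Schema.create(Schema.Type.STRING)));
    try {
      // Keys and values are wrapped so the Avro serialization accepts them.
      writer.append(new AvroKey<Integer>(1), new AvroValue<CharSequence>("one"));
    } finally {
      writer.close();
    }
  }
}
```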
Copyright © 2009-2012 The Apache Software Foundation. All Rights Reserved.