Aliyun Object Storage Service (Aliyun OSS) is widely used, particularly among China's cloud users, and provides cloud object storage for a variety of use cases.
The Hadoop file system has supported OSS since version 2.9.1. Now you can also use OSS with Flink for reading and writing data.
You can access OSS objects like this:
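OSS paths use the oss:// scheme; bucket and object names below are placeholders:

```
oss://<your-bucket>/<object-name>
```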
The following shows how to use OSS with Flink:
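As a minimal sketch (bucket and object names are placeholders; `env` is assumed to be an existing StreamExecutionEnvironment and `stream` an existing DataStream):

```java
// Read from an OSS bucket
env.readTextFile("oss://<bucket-name>/<object-name>");

// Write to an OSS bucket
stream.writeAsText("oss://<bucket-name>/<object-name>");

// Use OSS for checkpoint storage via the FsStateBackend
env.setStateBackend(new FsStateBackend("oss://<bucket-name>/<object-name>"));
```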
There are two ways to use OSS with Flink. Our shaded flink-oss-fs-hadoop will cover most scenarios. However, you may need to set up a specific Hadoop OSS FileSystem implementation if you want to use OSS as YARN's resource storage dir (this patch enables YARN to use OSS). Both ways are described below.
Shaded Hadoop OSS file system (recommended)
In order to use flink-oss-fs-hadoop, copy the respective JAR file from the opt directory to the lib directory of your Flink distribution before starting Flink, e.g.
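For example, assuming the JAR name matches your Flink version (the exact file name varies per release):

```sh
cp ./opt/flink-oss-fs-hadoop-*.jar ./lib/
```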
flink-oss-fs-hadoop registers default FileSystem wrappers for URIs with the oss:// scheme.
After setting up the OSS FileSystem wrapper, you need to add some configurations to make sure that Flink is allowed to access your OSS buckets.
In order to use OSS with Flink more easily, you can use the same configuration keys in flink-conf.yaml as in Hadoop's core-site.xml.
There are some required configurations that must be added to flink-conf.yaml (other configurations defined in the Hadoop OSS documentation are advanced options used for performance tuning):
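The required keys are the OSS endpoint and your credentials, for example in flink-conf.yaml (all values below are placeholders):

```yaml
fs.oss.endpoint: <OSS endpoint to connect to, e.g. oss-cn-hangzhou.aliyuncs.com>
fs.oss.accessKeyId: <your Aliyun access key ID>
fs.oss.accessKeySecret: <your Aliyun access key secret>
```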
Hadoop-provided OSS file system - manual setup
This setup is a bit more complex and we recommend using our shaded Hadoop file systems instead (see above) unless required otherwise, e.g. for using OSS as YARN’s resource storage dir via the fs.defaultFS configuration property in Hadoop’s core-site.xml.
Set OSS FileSystem
You need to point Flink to a valid Hadoop configuration, which contains the following properties in core-site.xml:
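A minimal core-site.xml might look like the following (endpoint and credential values are placeholders to fill in):

```xml
<configuration>
  <property>
    <name>fs.oss.impl</name>
    <value>org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem</value>
  </property>
  <property>
    <name>fs.oss.endpoint</name>
    <value><!-- OSS endpoint to connect to --></value>
  </property>
  <property>
    <name>fs.oss.accessKeyId</name>
    <value><!-- your Aliyun access key ID --></value>
  </property>
  <property>
    <name>fs.oss.accessKeySecret</name>
    <value><!-- your Aliyun access key secret --></value>
  </property>
</configuration>
```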
You can specify the Hadoop configuration in various ways, pointing Flink to the path of the Hadoop configuration directory, for example:
- by setting the environment variable HADOOP_CONF_DIR, or
- by setting the fs.hdfs.hadoopconf configuration option in flink-conf.yaml:
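For example, in flink-conf.yaml:

```yaml
fs.hdfs.hadoopconf: /path/to/etc/hadoop
```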
This registers /path/to/etc/hadoop as Hadoop’s configuration directory with Flink. Flink will look for the core-site.xml and hdfs-site.xml files in the specified directory.
Provide OSS FileSystem Dependency
The Hadoop OSS FileSystem is packaged in the hadoop-aliyun artifact. This JAR and all its dependencies need to be added to Flink's classpath, i.e. the class path of both the JobManager and TaskManagers.
There are multiple ways of adding JARs to Flink's class path, the easiest being simply to drop the JARs into Flink's lib folder. You need to copy the hadoop-aliyun JAR and all its dependencies (you can find these as part of the Hadoop binaries in hadoop-3/share/hadoop/tools/lib). You can also export the directory containing these JARs as part of the HADOOP_CLASSPATH environment variable on all machines.
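For example, assuming a Hadoop 3 binary distribution unpacked at a hypothetical /path/to/hadoop-3:

```shell
export HADOOP_CLASSPATH=/path/to/hadoop-3/share/hadoop/tools/lib/*:$HADOOP_CLASSPATH
```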
Below is an example showing the result of our setup (data generated by the TPC-DS tool).
Could not find OSS file system
If your job submission fails with an exception message like the one below, please check whether our shaded JAR (flink-oss-fs-hadoop-1.9-SNAPSHOT.jar) is in the lib directory.
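The exception typically looks similar to the following (exact wording may differ between Flink versions):

```
org.apache.flink.core.fs.UnsupportedFileSystemSchemeException:
Could not find a file system implementation for scheme 'oss'.
The scheme is not directly supported by Flink and no Hadoop file
system to support this scheme could be loaded.
```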
If your job submission fails with an exception message like the one below, please check whether the corresponding configurations exist in flink-conf.yaml.