Why are my encodings with S3 outputs failing although the credentials are correct?
When you are using cloud storage like AWS S3 or Google Cloud Storage, a minimum set of requirements and valid credentials are necessary for our encoding service to use your bucket as an output destination and to operate on it accordingly.
Common Issues and how to solve them
- Invalid Access/Secret Key pairs - These values are always a mix of letters, numbers, and special characters, so most people simply select the strings and copy & paste them. While doing that, it is easy to accidentally copy a whitespace, tab, or other character along with them, which eventually causes the credentials to become invalid. Therefore, please ensure that only the string you want to copy & paste is selected and nothing else (see the first sketch after this list).
- Default Bucket ACL rules - Amazon introduced additional ACL rules that are enabled by default when you create a bucket. These can prevent objects added to the bucket from being made publicly accessible.
  - If you want objects to become publicly accessible when they are uploaded to your bucket by our encoding service, adjust your bucket settings accordingly.
  - Alternatively, make sure that you set the ACL permission to PRIVATE on all resources associated with files to be written to the output (Muxings, Manifests, Sidecar files, etc.), as shown in the second sketch after this list.
- Wrong Bucket Region - When creating an Input/Output resource in the Bitmovin API, the service determines the bucket region automatically. Therefore, it is highly recommended to use AUTO instead of selecting a region manually. If you do select it manually, make sure to choose the region your bucket is actually located in, otherwise the upload process will fail (the first sketch after this list leaves the region unset for this reason).
- Insufficient Permissions - Our service requires a minimum set of permissions to be able to use a bucket as an output destination. Learn more about these requirements for necessary permissions.
  - If you use a custom policy, make sure that the Resource ARNs match the output path you have configured (see the policy sketch after this list).
- Overwriting existing files on the storage fails - If you look at the minimum set of requirements, you will see that being able to delete objects is not part of it. This also implies that we are not able to overwrite objects, e.g. if an object with the same name and path already exists. If you want to be able to do that, you need to add the appropriate permission to your IAM policy (shown as an optional action in the policy sketch after this list).
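The following sketch shows how the credential and region advice above might look in practice when creating an S3 output. It is a minimal sketch assuming the Bitmovin Python SDK (`bitmovin-api-sdk`); the API key, bucket name, and key values are placeholders.

```python
from bitmovin_api_sdk import BitmovinApi, S3Output

# Placeholder values - replace them with your own API key and bucket credentials.
API_KEY = "<BITMOVIN_API_KEY>"
RAW_ACCESS_KEY = " AKIAEXAMPLEACCESSKEY "   # note the accidentally copied whitespace
RAW_SECRET_KEY = " exampleSecretKeyValue "

bitmovin_api = BitmovinApi(api_key=API_KEY)

# strip() removes whitespace or tabs that are easily selected along with the key
# and would otherwise make the credentials invalid.
s3_output = bitmovin_api.encoding.outputs.s3.create(
    s3_output=S3Output(
        name="my-s3-output",
        bucket_name="my-output-bucket",
        access_key=RAW_ACCESS_KEY.strip(),
        secret_key=RAW_SECRET_KEY.strip(),
        # The region is intentionally not set here, so the bucket region can be
        # determined automatically (the recommended AUTO behaviour).
    )
)
print("Created S3 output:", s3_output.id)
```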
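If your bucket blocks public ACLs, the resources that write files can request a PRIVATE ACL instead. This is a sketch under the same assumption (Bitmovin Python SDK); `s3_output` refers to the output created in the previous sketch, and the output path is a placeholder.

```python
from bitmovin_api_sdk import AclEntry, AclPermission, EncodingOutput

# Request a PRIVATE ACL so the service does not try to make uploaded objects
# publicly readable, which a bucket with public access blocked would reject.
encoding_output = EncodingOutput(
    output_id=s3_output.id,                  # the S3 output created in the previous sketch
    output_path="path/to/encoding/output/",  # placeholder output path
    acl=[AclEntry(permission=AclPermission.PRIVATE)],
)

# Use this EncodingOutput wherever files are written, e.g. on Muxings,
# Manifests, and Sidecar files, so all of them are stored with a PRIVATE ACL.
```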
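Finally, a sketch of what a custom IAM policy for the output bucket could look like. The action list, bucket name, and path below are illustrative assumptions only; the authoritative minimum set of permissions is documented in the requirements mentioned above. The commented-out `s3:DeleteObject` action is only needed if existing files should be overwritable. The policy is written as a Python dictionary so it can be printed as JSON and pasted into the IAM console.

```python
import json

BUCKET_NAME = "my-output-bucket"            # placeholder bucket name
OUTPUT_PATH = "path/to/encoding/output/*"   # must cover the configured output path

# Illustrative policy only - check the requirements documentation for the
# authoritative minimum set of actions.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "BitmovinOutputBucketAccess",
            "Effect": "Allow",
            "Action": ["s3:GetBucketLocation", "s3:ListBucket"],
            "Resource": f"arn:aws:s3:::{BUCKET_NAME}",
        },
        {
            "Sid": "BitmovinOutputObjectAccess",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:PutObjectAcl",
                # "s3:DeleteObject",  # only needed if existing files should be overwritable
            ],
            # The Resource ARN has to match the output path configured on the EncodingOutput.
            "Resource": f"arn:aws:s3:::{BUCKET_NAME}/{OUTPUT_PATH}",
        },
    ],
}

# Print the policy as JSON so it can be pasted into the IAM console.
print(json.dumps(policy, indent=2))
```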