npx remotion cloudrun render

EXPERIMENTAL

Using the npx remotion cloudrun render command, you can render a video on GCP.

The structure of the command is as follows:

npx remotion cloudrun render <serve-url> [<composition-id>] [<output-location>]
  • The serve URL is obtained by deploying a Remotion project to a GCP Storage bucket using the sites create command or by calling deploySite().
  • The composition ID. If not specified, the list of compositions will be fetched and you can choose a composition.
  • The output-location parameter is optional. If you don't specify it, the video will be stored in your Cloud Storage bucket. If you specify a location, it will be downloaded to your device in an additional step.

Example commands

Rendering a video, passing the service name:

npx remotion cloudrun render https://storage.googleapis.com/remotioncloudrun-123asd321/sites/abcdefgh/index.html tiles --service-name=remotion--3-3-82--mem512mi--cpu1-0--t-800

Using the site name as opposed to the full serve URL:

npx remotion cloudrun render test-site tiles --service-name=remotion--3-3-82--mem512mi--cpu1-0--t-800

Passing in input props:

npx remotion cloudrun render test-site tiles --service-name=remotion--3-3-82--mem512mi--cpu1-0--t-800 --props='{"hi": "there"}'

Flags

--region

The GCP region to select. For the lowest latency, the service, site and output bucket should be in the same region.

--props

Input Props to pass to the selected composition of your video. Must be a serialized JSON string (--props='{"hello": "world"}') or a path to a JSON file (./path/to/props.json).
note

Inline JSON strings are not supported on Windows shells because they remove the " character, use a file name instead.
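A minimal sketch of the file-based workaround, reusing the test-site site and tiles composition from the examples above:

```shell
# Write the input props to a JSON file to avoid shell quoting issues
# (the file name props.json is an arbitrary choice):
printf '%s' '{"hi": "there"}' > props.json
# Then pass the file path instead of an inline string:
# npx remotion cloudrun render test-site tiles --service-name=remotion--3-3-82--mem512mi--cpu1-0--t-800 --props=./props.json
```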

--privacy

One of:

  • "public" (default): The rendered media is publicly accessible under the Cloud Storage URL.
  • "private": The rendered media is not publicly available, but is available within the GCP project to those with the correct permissions.

--force-bucket-name

Specify a specific bucket name to be used for the output. The resulting Google Cloud Storage URL will be in the format gs://{bucket-name}/renders/{render-id}/{file-name}. If not set, Remotion will choose the right bucket to use based on the region.

--concurrency

How many CPU threads to use. Minimum 1. The maximum is the number of threads you have (in Node.js: os.cpus().length). You can also provide a percentage value (e.g. 50%).
note

Before v4.0.76, this defaulted to "100%". It is now aligned with the other server-side rendering APIs.
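The percentage form maps to a thread count by simple arithmetic; the 8-thread machine below is an assumption for illustration:

```shell
# Sketch: how a --concurrency percentage maps to a thread count,
# assuming a machine where os.cpus().length is 8:
cores=8
percent=50
threads=$(( cores * percent / 100 ))
echo "$threads"   # 4
```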

--jpeg-quality

A value between 0 and 100 for the JPEG rendering quality (/docs/config#setjpegquality). Has no effect when rendering PNG frames.

--image-format

The image format to use when rendering frames for a video. Must be one of "png", "jpeg", "none". Default: "jpeg". JPEG is faster, but does not support transparency.

--scale

Scales the output frames by the factor you pass in. For example, a 1280x720px frame will become a 1920x1080px frame with a scale factor of 1.5. Vector elements like fonts and HTML markup will be rendered with extra detail.
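The 1280x720 example works out as follows (a sketch; writing 1.5 as 15/10 keeps the POSIX shell arithmetic integer-only):

```shell
# A 1280x720 frame rendered with --scale=1.5:
width=1280; height=720
scaled_width=$(( width * 15 / 10 ))
scaled_height=$(( height * 15 / 10 ))
echo "${scaled_width}x${scaled_height}"   # 1920x1080
```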

--env-file

Specify a location for a dotenv file. Default .env.

--out-name

The file name of the media output as stored in the Cloud Storage bucket. By default, it is out plus the appropriate file extension, for example: out.mp4. Must match /([0-9a-zA-Z-!_.*'()/]+)/g.

--cloud-run-url

Specify the URL of the service which should be used to perform the render. You must set either cloud-run-url or service-name, but not both.

--service-name

Specify the name of the service which should be used to perform the render. This is used in conjunction with the region to determine the service endpoint, as the same service name can exist across multiple regions. You must set either cloud-run-url or service-name, but not both.

--codec

One of h264, h265, png, vp8, mp3, aac, wav, prores. If you don't supply --codec, it will use h264.

--audio-codec

Set the format of the audio that is embedded in the video. Not all codec and audio codec combinations are supported and certain combinations require a certain file extension and container format. See the table in the docs to see possible combinations.

--audio-bitrate

Specify the target bitrate for the generated audio. The syntax for FFmpeg's -b:a parameter should be used. FFmpeg may encode the audio in a way that will not result in the exact audio bitrate specified. Example values: 512K for 512 kbps, 1M for 1 Mbps. Default: 320k.

--video-bitrate

Specify the target bitrate for the generated video. The syntax for FFmpeg's -b:v parameter should be used. FFmpeg may encode the video in a way that will not result in the exact video bitrate specified. Example values: 512K for 512 kbps, 1M for 1 Mbps.
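As a sketch of how these values relate to raw bits per second (to_bps is a hypothetical helper, not part of the CLI; it assumes FFmpeg's convention of K = 1000):

```shell
# Convert FFmpeg-style bitrate suffixes to bits per second:
to_bps() {
  case "$1" in
    *K) echo $(( ${1%K} * 1000 )) ;;       # kilobits
    *M) echo $(( ${1%M} * 1000000 )) ;;    # megabits
    *)  echo "$1" ;;                       # already plain bits
  esac
}
to_bps 512K   # 512000
to_bps 1M     # 1000000
```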

--webhook

Your webhook URL that will be called with the progress of the render. This will be sent using a POST request.

Example:

--webhook=https://example.com/webhook

The webhook will receive a POST request with a JSON body containing:

{
  "progress": 0.1,
  "renderedFrames": 100,
  "encodedFrames": 100,
  "renderId": "1234567890",
  "projectId": "1234567890"
}
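A quick way to inspect such a payload from a shell, assuming the example body above (a real endpoint would use a proper JSON parser):

```shell
# Extract the progress field from the webhook payload with POSIX sed:
payload='{"progress":0.1,"renderedFrames":100,"encodedFrames":100,"renderId":"1234567890","projectId":"1234567890"}'
progress=$(printf '%s' "$payload" | sed -n 's/.*"progress":\([0-9.]*\).*/\1/p')
echo "$progress"   # 0.1
```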

--render-id-override

Provide a specific render ID for the render. Otherwise a random one will be generated.

Example:

--render-id-override=my-custom-render-id
note

You are responsible for ensuring that the render ID is unique, otherwise an existing render with the same render ID will be overwritten.

--prores-profile

Set the ProRes profile. This option is only valid if the codec has been set to prores. Possible values: "4444-xq", "4444", "hq", "standard", "light", "proxy". Default: "hq". See here for an explanation of possible values.

--x264-preset

Sets a x264 preset profile. Only applies to videos rendered with h264 codec.
Possible values: superfast, veryfast, faster, fast, medium, slow, slower, veryslow, placebo.
Default: medium

--crf

设置输出的恒定码率因子 (CRF)。最小值为 0。如果你想保持最佳质量而不太关心文件大小,请使用此码率控制模式。

--pixel-format

Sets the pixel format in FFmpeg. See the FFmpeg docs for an explanation. Acceptable values: "yuv420p", "yuva420p", "yuv422p", "yuv444p", "yuv420p10le", "yuv422p10le", "yuv444p10le", "yuva444p10le".

--every-nth-frame

This option may only be set when rendering GIFs. It determines how many frames are rendered, while the other ones get skipped in order to lower the FPS of the GIF. For example, if the fps is 30, and everyNthFrame is 2, the FPS of the GIF is 15.

For example, only every second frame, every third frame and so on. Only works for rendering GIFs. See here for more details.
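The FPS relationship described above can be sketched as:

```shell
# Resulting GIF FPS for a given --every-nth-frame value,
# e.g. a 30 fps composition with every second frame rendered:
fps=30
every_nth=2
gif_fps=$(( fps / every_nth ))
echo "$gif_fps"   # 15
```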

--number-of-gif-loops

Allows you to set the number of loops as follows:
  • null (or omitting in the CLI) plays the GIF indefinitely.
  • 0 disables looping
  • 1 loops the GIF once (plays twice in total)
  • 2 loops the GIF twice (plays three times in total) and so on.
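The loop-to-plays relationship can be sketched as:

```shell
# Total playthroughs for a given --number-of-gif-loops value
# (the GIF plays once, then loops n additional times):
loops=2
plays=$(( loops + 1 ))
echo "$plays"   # 3
```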

--frames

Render a subset of a video. Pass a single number to render a still, or a range (e.g. 0-9) to render a subset of frames. Pass 100- to render from frame 100 to the end.
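The number of frames a range like 0-9 produces can be sketched as:

```shell
# Count the frames in a --frames=0-9 range using shell parameter expansion:
range="0-9"
start=${range%-*}   # part before the dash -> 0
end=${range#*-}     # part after the dash  -> 9
count=$(( end - start + 1 ))
echo "$count"   # 10
```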

--media-cache-size-in-bytes (v4.0.352)

Specify the maximum size of the cache that <Video> and <Audio> from @remotion/media may use combined, in bytes.
The default is half of the available system memory when the render starts.

--offthreadvideo-cache-size-in-bytes (v4.0.23)

From v4.0, Remotion has a cache for <OffthreadVideo> frames. The default is null, corresponding to half of the system memory available when the render starts.
This option allows you to override the size of the cache. The higher it is, the faster the render will be, but the more memory will be used.
The used value will be printed when running in verbose mode.
Default: null

--offthreadvideo-video-threads (v4.0.261)

The number of threads that <OffthreadVideo> can start to extract frames. The default is 2. Increase carefully, as too many threads may cause instability.

--color-space (v4.0.28)

Color space to use for the video. Acceptable values: "default" (default since 5.0), "bt601" (same as "default", since v4.0.424), "bt709" (since v4.0.28), "bt2020-ncl" (since v4.0.88), "bt2020-cl" (since v4.0.88).
For best color accuracy, it is recommended to also use "png" as the image format to have accurate color transformations throughout.
Colorspace conversion is only actually performed since v4.0.83; previously, only the metadata of the video would be tagged.

--metadata (v4.0.216)

Metadata to be embedded in the video. See here for which metadata is accepted.
The parameter must be in the format of --metadata key=value and can be passed multiple times.