<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://freemwiki.com/index.php?action=history&amp;feed=atom&amp;title=Openai%2F6912b0ee-f888-8005-bad4-23801314352b</id>
	<title>Openai/6912b0ee-f888-8005-bad4-23801314352b - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://freemwiki.com/index.php?action=history&amp;feed=atom&amp;title=Openai%2F6912b0ee-f888-8005-bad4-23801314352b"/>
	<link rel="alternate" type="text/html" href="https://freemwiki.com/index.php?title=Openai/6912b0ee-f888-8005-bad4-23801314352b&amp;action=history"/>
	<updated>2026-04-15T06:37:02Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.43.3</generator>
	<entry>
		<id>https://freemwiki.com/index.php?title=Openai/6912b0ee-f888-8005-bad4-23801314352b&amp;diff=592961&amp;oldid=prev</id>
		<title>Lukegao: Imported ChatGPT Share conversation 6912b0ee...</title>
		<link rel="alternate" type="text/html" href="https://freemwiki.com/index.php?title=Openai/6912b0ee-f888-8005-bad4-23801314352b&amp;diff=592961&amp;oldid=prev"/>
		<updated>2025-11-11T05:32:37Z</updated>

		<summary type="html">&lt;p&gt;导入ChatGPT Share对话 6912b0ee...&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;&lt;br /&gt;
{{DISPLAYTITLE:Motion amplification explanation}}&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;This conversation was exported from ChatGPT&amp;#039;&amp;#039;&amp;lt;ref&amp;gt;{{cite web|title=Motion amplification explanation|url=https://chatgpt.com/share/6912b0ee-f888-8005-bad4-23801314352b|publisher=ChatGPT|access-date=2025-11-11}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== gpt-5 ==&lt;br /&gt;
=== User: explain ===&lt;br /&gt;
explain &lt;br /&gt;
            # 2. Motion amplitude enhancement (brightness-preservation core algorithm)&lt;br /&gt;
            if motion_amplitude &amp;gt; 1.0:&lt;br /&gt;
                base_latent = concat_latent_image[:, :, 0:1]      # first frame&lt;br /&gt;
                gray_latent = concat_latent_image[:, :, 1:]       # gray frames&lt;br /&gt;
                &lt;br /&gt;
                diff = gray_latent - base_latent&lt;br /&gt;
                diff_mean = diff.mean(dim=(1, 3, 4), keepdim=True)&lt;br /&gt;
                diff_centered = diff - diff_mean&lt;br /&gt;
                scaled_latent = base_latent + diff_centered * motion_amplitude + diff_mean&lt;br /&gt;
                &lt;br /&gt;
                # Clamp &amp;amp; recombine&lt;br /&gt;
                scaled_latent = torch.clamp(scaled_latent, -6, 6)&lt;br /&gt;
                concat_latent_image = torch.cat([base_latent, scaled_latent], dim=2)&lt;br /&gt;
&lt;br /&gt;
=== Assistant: This block amplifies motion in a latent video representation while preserving brightness. ===&lt;br /&gt;
This block amplifies motion in a latent video representation while preserving brightness. Step-by-step:&lt;br /&gt;
# Condition: &amp;lt;code&amp;gt;if motion_amplitude &amp;gt; 1.0:&amp;lt;/code&amp;gt; ensures the enhancement runs only when the requested amplitude exceeds 1.0 (normal motion).&lt;br /&gt;
# Frame separation: &amp;lt;code&amp;gt;base_latent = concat_latent_image[:, :, 0:1]&amp;lt;/code&amp;gt; takes the first frame and &amp;lt;code&amp;gt;gray_latent = concat_latent_image[:, :, 1:]&amp;lt;/code&amp;gt; the subsequent frames. The latent tensor holds multiple frames along the frame dimension; the first frame acts as the reference.&lt;br /&gt;
# Difference extraction: &amp;lt;code&amp;gt;diff = gray_latent - base_latent&amp;lt;/code&amp;gt;, then &amp;lt;code&amp;gt;diff_mean = diff.mean(dim=(1, 3, 4), keepdim=True)&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;diff_centered = diff - diff_mean&amp;lt;/code&amp;gt;. This computes the motion signal (&amp;lt;code&amp;gt;diff&amp;lt;/code&amp;gt;) between frames, then removes its per-frame mean over the channel and spatial dimensions, isolating motion without brightness bias.&lt;br /&gt;
# Scaling motion: &amp;lt;code&amp;gt;scaled_latent = base_latent + diff_centered * motion_amplitude + diff_mean&amp;lt;/code&amp;gt; multiplies only the zero-mean motion component by &amp;lt;code&amp;gt;motion_amplitude&amp;lt;/code&amp;gt;; adding &amp;lt;code&amp;gt;diff_mean&amp;lt;/code&amp;gt; back restores the global luminance, so the mean brightness is unchanged.&lt;br /&gt;
# Clamp and recombine: &amp;lt;code&amp;gt;scaled_latent = torch.clamp(scaled_latent, -6, 6)&amp;lt;/code&amp;gt; limits latent values to avoid over-saturation, and &amp;lt;code&amp;gt;concat_latent_image = torch.cat([base_latent, scaled_latent], dim=2)&amp;lt;/code&amp;gt; reassembles the frames.&lt;br /&gt;
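The steps above can be checked numerically. This is a sketch only: NumPy stands in for torch, the [batch, channel, frame, height, width] layout and the tensor names are assumptions, and the clamp step is omitted since clamping can perturb the mean.&lt;br /&gt;

```python
import numpy as np

# Assumed layout: [batch, channel, frame, height, width]
rng = np.random.default_rng(0)
latent = rng.standard_normal((1, 4, 5, 8, 8))

base = latent[:, :, 0:1]    # reference (first) frame
gray = latent[:, :, 1:]     # subsequent frames
motion_amplitude = 2.5

diff = gray - base
diff_mean = diff.mean(axis=(1, 3, 4), keepdims=True)   # per-frame mean offset
scaled = base + (diff - diff_mean) * motion_amplitude + diff_mean

# The per-frame mean of each scaled frame equals that of the original frame:
# amplification only touched the zero-mean (motion) component of the difference.
print(np.allclose(scaled.mean(axis=(1, 3, 4)), gray.mean(axis=(1, 3, 4))))
```

Without the &amp;lt;code&amp;gt;+ diff_mean&amp;lt;/code&amp;gt; term, the check above fails whenever a frame's mean differs from the reference frame's, which is exactly the brightness drift the algorithm guards against.&lt;br /&gt;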
&lt;br /&gt;
Effect:&lt;br /&gt;
Motion contrast (changes between frames) is intensified while brightness consistency (the mean value) is preserved; that is the “brightness-preservation core algorithm” named in the code comment.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:ChatGPT Shares]]&lt;br /&gt;
[[Category:openai]]&lt;/div&gt;</summary>
		<author><name>Lukegao</name></author>
	</entry>
</feed>