V4.0.2 (2025.09.23)

1. Added prompt import in the original-content (原创) module
2. Added batch application of basic settings for MJ Video
3. Added general prefix / suffix settings in the original-content (原创) module
4. Added Kling image-to-video and video extension
lq1405 2025-09-23 15:20:47 +08:00
parent 7a16f02673
commit 2ce5409ec7
43 changed files with 4320 additions and 348 deletions

OBFUSCATION_README.md (new file, 153 lines added)

@ -0,0 +1,153 @@
# Renderer Code Obfuscation Guide
## ⚠️ White-Screen Issue: Resolution
If `npm run build:win:obfuscated` produced a white screen, the previous obfuscation configuration was too aggressive. It has been fixed, and a safer configuration is now provided.
### 🛠️ What Was Fixed
- **Removed all unsafe options**: avoids breaking Vue's reactivity system
- **Function and class names preserved**: keeps Vue components working
- **Property-name mangling disabled**: prevents breaking Vue internals
- **More key identifiers reserved**: including all important names related to Vue, Pinia, and Naive UI
## 🎯 Recommended Usage
### Safe Obfuscated Build (recommended)
```bash
npm run build:win:safe
```
This command uses the new safe obfuscation configuration, providing basic protection while keeping the app running normally.
### Regular Build (no obfuscation)
```bash
npm run build:win
```
If obfuscation is not needed, use the regular build.
## Obfuscation Features
### Safe Obfuscation Level (OBFUSCATE=true)
- ✅ **Variable-name mangling**: local variable names are shortened
- ✅ **Code compression**: whitespace, line breaks, and comments are removed
- ✅ **Console removal**: console.log and other debug output is stripped
- ✅ **File-name obfuscation**: short hashed file names are generated
- ✅ **Vue ecosystem preserved**: fully compatible with Vue, Naive UI, and Pinia
- ❌ Property-name mangling: disabled to avoid breaking reactivity
- ❌ Unsafe optimizations: disabled to ensure stability
## Build Command Comparison
| Command | Obfuscation | Protection | Compatibility | Recommended For |
|------|----------|--------|--------|----------|
| `npm run build:win` | None | Low | 100% | Development / testing |
| `npm run build:win:safe` | Safe obfuscation | Medium | 99% | **Production (recommended)** |
## Verifying the Build
### 1. Check that the app starts
```bash
npm run build:unpack:safe
npm start
```
### 2. Check the obfuscation result
Inspect the `out/renderer/assets/*.js` files:
- Variable names should be short identifiers such as a, b, c
- The code should be compressed onto only a few lines
- There should be no console.log output
### 3. Functional testing
- All pages render normally
- Vue component reactivity works
- Naive UI components work correctly
- Electron APIs are called normally
## Safety Measures
### Reserved Key Identifiers
```javascript
// Vue core
'Vue', 'vue', 'reactive', 'ref', 'computed', 'watch'
// Vue Router
'router', 'route', 'useRouter', 'useRoute'
// Pinia
'pinia', 'store', 'useStore', 'defineStore'
// Naive UI
'naive', 'NaiveUi', 'useDialog', 'useMessage'
// Electron
'ElectronAPI', 'ipcRenderer', 'contextBridge'
```
### Disabled Dangerous Options
```javascript
// These options can break Vue and have all been disabled
unsafe: false,
unsafe_comps: false,
unsafe_Function: false,
unsafe_methods: false,
properties: false // property-name mangling
```
## Troubleshooting
### If the white screen persists
1. **Use the regular build**:
```bash
npm run build:win
```
2. **Check console errors**:
Open the developer tools and inspect the specific error messages
3. **Test step by step**:
```bash
npm run build:unpack:safe # build without packaging first
npm start # check that it runs
```
### If specific features misbehave
1. **Check for dynamic property access** (see the sketch after this list):
`obj[dynamicKey]` may need special handling
2. **Add reserved names**:
add the relevant identifiers to the `reserved` array
3. **Temporarily disable obfuscation**:
```bash
npm run build:win # use the regular build
```
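For illustration, a minimal sketch of the failure mode behind step 1, assuming a hypothetical configuration where `mangle.properties` were enabled (it is disabled in this project's safe config):

```ts
// With property mangling on, `imageQuality` may be renamed to something like `a`,
// but the string key built at runtime is not rewritten, so the lookup returns undefined.
const settings = { imageQuality: 'high', videoQuality: 'hd' }

const dynamicKey: string = 'image' + 'Quality' // invisible to the obfuscator
console.log((settings as Record<string, string>)[dynamicKey])

// Work-arounds: keep `properties: false` (the current safe config), or add the name to `reserved`.
```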
## Technical Details
### Key Points of the Obfuscation Configuration
```javascript
terserOptions: {
  compress: {
    drop_console: true, // remove console output
    keep_fargs: true, // keep function arguments (Vue needs them)
    keep_classnames: true, // keep class names (Vue components)
    keep_fnames: true, // keep function names (Vue methods)
    unsafe: false // disable unsafe optimizations
  },
  mangle: {
    properties: false, // do not mangle property names
    reserved: [...] // large list of reserved names
  }
}
```
### Why This Configuration
- **Vue's reactivity system** relies on property names staying unchanged
- **The component system** needs class and function names preserved
- **Dynamic property access** can fail after obfuscation
- **Third-party library integration** requires certain identifiers to be preserved
## Conclusion
The new safe obfuscation configuration protects the code while keeping the app stable. Although the obfuscation strength is lower than before, it is sufficient for most protection needs and avoids compatibility problems such as the white screen.
**Recommended**: `npm run build:win:safe`

@ -6,53 +6,142 @@ import Components from 'unplugin-vue-components/vite'
import tsconfigPaths from 'vite-tsconfig-paths'
import { NaiveUiResolver } from 'unplugin-vue-components/resolvers'

The electron-vite config previously exported a static object with the same main / preload / renderer sections and no renderer build options; it now exports a function of ({ command }) so the obfuscation settings can be applied conditionally:

export default defineConfig(({ command }) => {
  const isProduction = command === 'build'
  const enableObfuscation = process.env.OBFUSCATE === 'true'

  return {
    main: {
      resolve: {
        alias: {
          '@': resolve('src'),
          '@renderer': resolve('src/renderer/src')
        }
      },
      plugins: [externalizeDepsPlugin(), bytecodePlugin(), tsconfigPaths()]
    },
    preload: {
      plugins: [externalizeDepsPlugin(), bytecodePlugin()],
      resolve: {
        alias: {
          '@': resolve('src'),
          '@renderer': resolve('src/renderer/src')
        }
      }
    },
    renderer: {
      resolve: {
        alias: {
          '@': resolve('src'),
          '@renderer': resolve('src/renderer/src')
        }
      },
      plugins: [
        bytecodePlugin(),
        vue({
          template: {
            compilerOptions: {
              // 将webview标签标记为自定义元素
              isCustomElement: (tag) => ['webview'].includes(tag)
            }
          }
        }),
        AutoImport({
          imports: [
            'vue',
            {
              'naive-ui': ['useDialog', 'useMessage', 'useNotification', 'useLoadingBar']
            }
          ]
        }),
        Components({
          resolvers: [NaiveUiResolver()]
        })
      ],
build: {
// 为生产环境添加代码混淆和优化
...(isProduction ? {
minify: 'terser',
terserOptions: {
compress: {
// 安全的压缩选项 - 避免破坏Vue响应式系统
drop_console: enableObfuscation, // 根据环境变量决定是否移除console
drop_debugger: true, // 移除debugger
pure_funcs: enableObfuscation ? ['console.log', 'console.info', 'console.debug', 'console.warn'] : [],
passes: 1, // 只进行一轮压缩,避免过度优化
keep_fargs: true, // 保留函数参数对Vue很重要
// 关闭所有unsafe选项确保Vue和响应式系统正常工作
unsafe: false,
unsafe_comps: false,
unsafe_Function: false,
unsafe_math: false,
unsafe_symbols: false,
unsafe_methods: false,
unsafe_proto: false,
unsafe_regexp: false,
unsafe_undefined: false,
// 保留一些对Vue重要的功能
keep_classnames: true, // 保留类名
keep_fnames: true, // 保留函数名对Vue组件很重要
// 移除条件编译,避免影响动态代码
global_defs: {}
},
mangle: enableObfuscation ? {
// 更保守的混淆策略
properties: false, // 完全关闭属性名混淆避免破坏Vue
// 保留更多重要的标识符
reserved: [
// Vue 核心
'Vue', 'vue', 'VNode', 'Component', 'Directive', 'Plugin', 'App', 'app',
'reactive', 'ref', 'computed', 'watch', 'watchEffect', 'onMounted', 'onUnmounted',
'provide', 'inject', 'createApp', 'mount', 'unmount', 'nextTick',
// Vue Router
'router', 'route', 'useRouter', 'useRoute', 'RouterView', 'RouterLink',
// Pinia
'pinia', 'store', 'useStore', 'defineStore', 'storeToRefs',
// Naive UI 相关
'naive', 'NaiveUi', 'n-', 'NButton', 'NInput', 'NForm', 'NSelect', 'NModal',
'NCard', 'NLayout', 'NSpace', 'NGrid', 'NGridItem', 'NIcon', 'NText',
'useDialog', 'useMessage', 'useNotification', 'useLoadingBar',
// Electron 相关
'ElectronAPI', 'electron', 'ipcRenderer', 'contextBridge', 'electronAPI',
// 通用保留
'require', 'exports', 'module', '__dirname', '__filename',
'window', 'document', 'global', 'process', 'Buffer',
// 保留以$开头的Vue特殊属性
'$', '$emit', '$props', '$attrs', '$slots', '$refs', '$parent', '$root',
// 保留一些常用的方法名
'toString', 'valueOf', 'hasOwnProperty', 'isPrototypeOf',
// 保留事件相关
'addEventListener', 'removeEventListener', 'dispatchEvent'
]
} : false,
format: {
// 更安全的格式化选项
comments: false, // 移除注释
beautify: false, // 不美化代码
// 使用更保守的格式化设置
ascii_only: false, // 不强制ASCII避免中文问题
wrap_iife: false, // 不包装IIFE避免作用域问题
semicolons: true // 保留分号,确保代码正确性
}
},
rollupOptions: {
output: {
// 文件名混淆保持简单
chunkFileNames: enableObfuscation ?
'assets/c[hash:8].js' :
'assets/[name]-[hash].js',
entryFileNames: enableObfuscation ?
'assets/e[hash:8].js' :
'assets/[name]-[hash].js',
assetFileNames: enableObfuscation ?
'assets/a[hash:8].[ext]' :
'assets/[name]-[hash].[ext]'
}
}
} : {})
      }
    }
  }
})

package-lock.json (generated, 905 lines changed): diff suppressed because it is too large

@ -1,7 +1,7 @@
 {
   "name": "laitool-pro",
   "productName": "LaiToolPro",
-  "version": "v4.0.1",
+  "version": "v4.0.2",
   "description": "来推 Pro - 一款集音频处理、文案生成、图片生成、视频生成等功能于一体的多合一AI工具软件。",
   "main": "./out/main/index.js",
   "author": "xiangbei",
@ -15,11 +15,16 @@
     "start": "electron-vite preview",
     "dev": "electron-vite dev",
     "build": "npm run typecheck && electron-vite build",
+    "build:safe": "cross-env OBFUSCATE=true npm run typecheck && cross-env OBFUSCATE=true electron-vite build",
     "postinstall": "electron-builder install-app-deps",
     "build:unpack": "npm run build && electron-builder --dir",
+    "build:unpack:safe": "npm run build:safe && electron-builder --dir",
     "build:win": "npm run build && electron-builder --win",
+    "build:win:safe": "npm run build:safe && electron-builder --win",
     "build:mac": "npm run build && electron-builder --mac",
+    "build:mac:safe": "npm run build:safe && electron-builder --mac",
-    "build:linux": "npm run build && electron-builder --linux"
+    "build:linux": "npm run build && electron-builder --linux",
+    "build:linux:obfuscated": "npm run build:obfuscated && electron-builder --linux"
   },
   "dependencies": {
     "@alicloud/alimt20181012": "^1.3.0",
@ -52,15 +57,20 @@
     "@electron-toolkit/eslint-config-prettier": "3.0.0",
     "@electron-toolkit/eslint-config-ts": "^3.0.0",
     "@electron-toolkit/tsconfig": "^1.0.1",
+    "@rollup/plugin-terser": "^0.4.4",
     "@types/node": "^22.13.4",
     "@vitejs/plugin-vue": "^5.2.1",
+    "cross-env": "^10.0.0",
     "electron": "^34.2.0",
     "electron-builder": "^25.1.8",
     "electron-vite": "^3.0.0",
     "eslint": "^9.20.1",
     "eslint-plugin-vue": "^9.32.0",
+    "javascript-obfuscator": "^4.1.1",
     "naive-ui": "^2.41.0",
     "prettier": "^3.5.1",
+    "rollup-plugin-obfuscator": "^1.1.0",
+    "terser": "^5.44.0",
     "typescript": "^5.7.3",
     "unplugin-auto-import": "^19.1.2",
     "unplugin-vue-components": "^28.4.1",

@ -158,7 +158,7 @@ export function GetImageTypeFromBase64(base64String: string): string {
  * @param url URL
  * @returns Promise<string> PromiseMIME类型的base64字符串
  */
-export function GetImageBase64(url: string): Promise<string> {
+export function GetImageBase64(url: string, noPrefix: boolean = false): Promise<string> {
   if (!url) {
     return Promise.reject(t("{data} 不能为空", {
       data: 'URL'
@ -173,8 +173,12 @@ export function GetImageBase64(url: string): Promise<string> {
         response.on('data', (chunk) => data.push(chunk))
         response.on('end', () => {
           const buffer = Buffer.concat(data)
+          if (noPrefix) {
+            return resolve(buffer.toString('base64'))
+          }
           const base64Data = `data:${mimeType};base64,${buffer.toString('base64')}`
           resolve(base64Data)
         })
       })
       .on('error', (err) => reject(err))
@ -186,6 +190,9 @@ export function GetImageBase64(url: string): Promise<string> {
         reject(err)
       } else {
         const mimeType = GetMimeType(url)
+        if (noPrefix) {
+          return resolve(data.toString('base64'))
+        }
         const base64Data = `data:${mimeType};base64,${data.toString('base64')}`
         resolve(base64Data)
       }
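A short usage sketch of the new optional flag (the URL is a placeholder, not from the repository):

```ts
const dataUrl = await GetImageBase64('https://example.com/cover.png')         // "data:image/png;base64,...."
const rawBase64 = await GetImageBase64('https://example.com/cover.png', true) // bare base64, no MIME prefix
```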


@ -1,20 +1,11 @@
import { t } from '@/i18n' import { t } from '@/i18n'
import { AIStoryboardMasterAIEnhance } from './aiPrompt/bookStoryboardPrompt/aiStoryboardMasterAIEnhance'
import { AIStoryboardMasterGeneral } from './aiPrompt/bookStoryboardPrompt/aiStoryboardMasterGeneral'
import { AIStoryboardMasterMJAncientStyle } from './aiPrompt/bookStoryboardPrompt/aiStoryboardMasterMJAncientStyle'
import { AIStoryboardMasterOptimize } from './aiPrompt/bookStoryboardPrompt/aiStoryboardMasterOptimize'
import { AIStoryboardMasterScenePrompt } from './aiPrompt/bookStoryboardPrompt/aiStoryboardMasterScenePrompt'
import { AIStoryboardMasterSDEnglish } from './aiPrompt/bookStoryboardPrompt/aiStoryboardMasterSDEnglish'
import { AIStoryboardMasterSingleFrame } from './aiPrompt/bookStoryboardPrompt/aiStoryboardMasterSingleFrame'
import { AIStoryboardMasterSingleFrameWithCharacter } from './aiPrompt/bookStoryboardPrompt/aiStoryboardMasterSingleFrameWithCharacter'
import { AIStoryboardMasterSpecialEffects } from './aiPrompt/bookStoryboardPrompt/aitoryboardMasterSpecialEffects'
export type AiInferenceModelModel = { export type AiInferenceModelModel = {
value: string // AI选项值 value: string // AI选项值
label: string // AI选项标签 label: string // AI选项标签
hasExample: boolean // 是否有示例 hasExample: boolean // 是否有示例
mustCharacter: boolean // 是否必须包含角色 mustCharacter: boolean // 是否必须包含角色
requestBody: OpenAIRequest.Request // AI请求体 requestBody: string | OpenAIRequest.Request // AI请求体
allAndExampleContent: string | null // 所有和示例内容 allAndExampleContent: string | null // 所有和示例内容
} }
@ -28,7 +19,7 @@ export const aiOptionsData: AiInferenceModelModel[] = [
label: t('【LaiTool】场景提示大师上下文-提示词不包含人物)'), label: t('【LaiTool】场景提示大师上下文-提示词不包含人物)'),
hasExample: false, hasExample: false,
mustCharacter: false, mustCharacter: false,
requestBody: AIStoryboardMasterScenePrompt, requestBody: "AIStoryboardMasterScenePrompt",
allAndExampleContent: null allAndExampleContent: null
}, },
{ {
@ -36,7 +27,7 @@ export const aiOptionsData: AiInferenceModelModel[] = [
label: t('【LaiTool】分镜大师-特效增强版(上下文-人物场景固定)'), label: t('【LaiTool】分镜大师-特效增强版(上下文-人物场景固定)'),
hasExample: false, hasExample: false,
mustCharacter: true, mustCharacter: true,
requestBody: AIStoryboardMasterSpecialEffects, requestBody: "AIStoryboardMasterSpecialEffects",
allAndExampleContent: null allAndExampleContent: null
}, },
{ {
@ -44,7 +35,7 @@ export const aiOptionsData: AiInferenceModelModel[] = [
label: t('【LaiTool】分镜大师-通用版(上下文-人物场景固定-类型推理)'), label: t('【LaiTool】分镜大师-通用版(上下文-人物场景固定-类型推理)'),
hasExample: false, hasExample: false,
mustCharacter: true, mustCharacter: true,
requestBody: AIStoryboardMasterGeneral, requestBody: "AIStoryboardMasterGeneral",
allAndExampleContent: null allAndExampleContent: null
}, },
{ {
@ -52,7 +43,7 @@ export const aiOptionsData: AiInferenceModelModel[] = [
label: t('【LaiTool】分镜大师-全面版-AI增强上下文-人物场景固定-单帧)'), label: t('【LaiTool】分镜大师-全面版-AI增强上下文-人物场景固定-单帧)'),
hasExample: false, hasExample: false,
mustCharacter: true, mustCharacter: true,
requestBody: AIStoryboardMasterAIEnhance, requestBody: "AIStoryboardMasterAIEnhance",
allAndExampleContent: null allAndExampleContent: null
}, },
{ {
@ -60,7 +51,7 @@ export const aiOptionsData: AiInferenceModelModel[] = [
label: t('【LaiTool】分镜大师-全能优化版(上下文-人物固定)'), label: t('【LaiTool】分镜大师-全能优化版(上下文-人物固定)'),
hasExample: false, hasExample: false,
mustCharacter: true, mustCharacter: true,
requestBody: AIStoryboardMasterOptimize, requestBody: "AIStoryboardMasterOptimize",
allAndExampleContent: null allAndExampleContent: null
}, },
{ {
@ -68,7 +59,7 @@ export const aiOptionsData: AiInferenceModelModel[] = [
label: t('【LaiTool】分镜大师-MJ超精细化版上下文-人物场景固定)'), label: t('【LaiTool】分镜大师-MJ超精细化版上下文-人物场景固定)'),
hasExample: false, hasExample: false,
mustCharacter: true, mustCharacter: true,
requestBody: AIStoryboardMasterMJAncientStyle, requestBody: "AIStoryboardMasterMJAncientStyle",
allAndExampleContent: null allAndExampleContent: null
}, },
{ {
@ -76,7 +67,7 @@ export const aiOptionsData: AiInferenceModelModel[] = [
label: t('【LaiTool】分镜大师-SD英文版上下文-人物场景固定-SD-英文提示词)'), label: t('【LaiTool】分镜大师-SD英文版上下文-人物场景固定-SD-英文提示词)'),
hasExample: false, hasExample: false,
mustCharacter: true, mustCharacter: true,
requestBody: AIStoryboardMasterSDEnglish, requestBody: "AIStoryboardMasterSDEnglish",
allAndExampleContent: null allAndExampleContent: null
}, },
{ {
@ -84,7 +75,7 @@ export const aiOptionsData: AiInferenceModelModel[] = [
label: t('【LaiTool】分镜大师-单帧分镜提示词(上下文-单帧-人物自动推理)'), label: t('【LaiTool】分镜大师-单帧分镜提示词(上下文-单帧-人物自动推理)'),
hasExample: false, hasExample: false,
mustCharacter: false, mustCharacter: false,
requestBody: AIStoryboardMasterSingleFrame, requestBody: "AIStoryboardMasterSingleFrame",
allAndExampleContent: null allAndExampleContent: null
}, },
{ {
@ -92,7 +83,7 @@ export const aiOptionsData: AiInferenceModelModel[] = [
label: t('【LaiTool】分镜大师-单帧分镜提示词(上下文-单帧-人物场景固定)'), label: t('【LaiTool】分镜大师-单帧分镜提示词(上下文-单帧-人物场景固定)'),
hasExample: false, hasExample: false,
mustCharacter: true, mustCharacter: true,
requestBody: AIStoryboardMasterSingleFrameWithCharacter, requestBody: "AIStoryboardMasterSingleFrameWithCharacter",
allAndExampleContent: null allAndExampleContent: null
} }
] ]


@ -0,0 +1,39 @@
import { t } from "@/i18n";
import { AIStoryboardMasterAIEnhance } from "./aiStoryboardMasterAIEnhance";
import { AIStoryboardMasterGeneral } from "./aiStoryboardMasterGeneral";
import { AIStoryboardMasterMJAncientStyle } from "./aiStoryboardMasterMJAncientStyle";
import { AIStoryboardMasterOptimize } from "./aiStoryboardMasterOptimize";
import { AIStoryboardMasterScenePrompt } from "./aiStoryboardMasterScenePrompt";
import { AIStoryboardMasterSDEnglish } from "./aiStoryboardMasterSDEnglish";
import { AIStoryboardMasterSingleFrame } from "./aiStoryboardMasterSingleFrame";
import { AIStoryboardMasterSingleFrameWithCharacter } from "./aiStoryboardMasterSingleFrameWithCharacter";
import { AIStoryboardMasterSpecialEffects } from "./aitoryboardMasterSpecialEffects";
// 根据 value 返回对应的分镜预设请求体对象
// value: 预设类型字符串
// 返回: OpenAIRequest.Request 对象
// 如果未找到对应类型会抛出错误(带有国际化提示)
export function GetAIPromptRequestBodyByValue(value: string): OpenAIRequest.Request {
switch (value) {
case "AIStoryboardMasterScenePrompt":
return AIStoryboardMasterScenePrompt;
case "AIStoryboardMasterSpecialEffects":
return AIStoryboardMasterSpecialEffects;
case "AIStoryboardMasterGeneral":
return AIStoryboardMasterGeneral;
case "AIStoryboardMasterAIEnhance":
return AIStoryboardMasterAIEnhance;
case "AIStoryboardMasterOptimize":
return AIStoryboardMasterOptimize;
case "AIStoryboardMasterMJAncientStyle":
return AIStoryboardMasterMJAncientStyle;
case "AIStoryboardMasterSDEnglish":
return AIStoryboardMasterSDEnglish;
case "AIStoryboardMasterSingleFrame":
return AIStoryboardMasterSingleFrame;
case "AIStoryboardMasterSingleFrameWithCharacter":
return AIStoryboardMasterSingleFrameWithCharacter;
default:
throw new Error(t('未找到对应的分镜预设的请求数据,请检查'))
}
}
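A minimal sketch of how the string-typed requestBody stored in aiOptionsData is resolved back into a request object at call time; it mirrors the change to aiReasonCommon shown further down in this commit (the ambient OpenAIRequest types are assumed to be in scope):

```ts
import { aiOptionsData } from '@/define/data/aiData/aiData'
import { GetAIPromptRequestBodyByValue } from '@/define/data/aiData/aiPrompt/bookStoryboardPrompt'

// Resolve an option's request body, whether it is stored as a preset key or as a full object.
const option = aiOptionsData[0]
const requestBody: OpenAIRequest.Request =
  typeof option.requestBody === 'string'
    ? GetAIPromptRequestBodyByValue(option.requestBody)
    : option.requestBody
```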

@ -1,6 +1,34 @@
 import { t } from "@/i18n"
-export const apiDefineData = [
+// API MJ URL 配置接口
+interface ApiMjUrl {
+  imagine: string;
+  describe: string;
+  video?: string;
+  update_file: string;
+  once_get_task: string;
+  query_url?: string;
+}
+// API D3 URL 配置接口
+interface ApiD3Url {
+  image: string;
+}
+// API 定义数据项接口
+export interface APIProviderDataItem {
+  label: string;
+  value: string;
+  id?: string;
+  gpt_url?: string;
+  base_url: string;
+  mj_url?: ApiMjUrl;
+  d3_url?: ApiD3Url;
+  buy_url?: string;
+  isPackage?: boolean;
+}
+export const apiDefineData: APIProviderDataItem[] = [
   {
     label: t('LAI API - 香港'),
     value: 'b44c6f24-59e4-4a71-b2c7-3df0c4e35e65',
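For reference, a hedged sketch of how a provider entry could be looked up by its id (GetApiDefineDataById is called in bookVideoService below, but its real implementation is not part of this excerpt, so this helper is only an assumption):

```ts
import { apiDefineData, APIProviderDataItem } from '@/define/data/apiData'

// Assumed lookup by the provider's value/id; the repository's actual helper may differ.
export function findApiProvider(id: string): APIProviderDataItem | undefined {
  return apiDefineData.find((item) => item.value === id || item.id === id)
}
```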

@ -30,8 +30,8 @@ interface ISoftwareData {
 }
 export const SoftwareData: ISoftwareData = {
-  version: 'V4.0.1',
-  date: '2025-09-21',
+  version: 'V4.0.2',
+  date: '2025-09-23',
   systemInfo: {
     documentationUrl: 'https://rvgyir5wk1c.feishu.cn/wiki/WdaWwAfDdiLOnjkywIgcaQoKnog',
     updateUrl: 'https://pvwu1oahp5m.feishu.cn/docx/CAjGdTDlboJ3nVx0cQccOuNHnvd',

@ -115,6 +115,8 @@ export enum BookBackTaskType {
   LUMA_VIDEO = 'luma_video',
   // kling 生成视频
   KLING_VIDEO = 'kling_video',
+  // kling 视频拓展
+  KLING_VIDEO_EXTEND = 'kling_video_extend',
   // MJ Video
   MJ_VIDEO = 'mj_video',
   // MJ VIDEO EXTEND 视频拓展

@ -68,6 +68,7 @@ export enum ResponseMessageType {
   RUNWAY_VIDEO = 'RUNWAY_VIDEO', // Runway生成视频
   LUMA_VIDEO = 'LUMA_VIDEO', // Luma生成视频
   KLING_VIDEO = 'KLING_VIDEO', // Kling生成视频
+  KLING_VIDEO_EXTEND = 'KLING_VIDEO_EXTEND', // Kling生成视频拓展
   MJ_VIDEO = 'MJ_VIDEO', // MJ生成视频
   MJ_VIDEO_EXTEND = 'MJ_VIDEO_EXTEND', // MJ生成视频拓展
   VIDEO_SUCESS = 'VIDEO_SUCESS' //视频生成成功

@ -11,6 +11,8 @@ export enum ImageToVideoModels {
   LUMA = 'LUMA',
   /** 可灵生成视频 */
   KLING = 'KLING',
+  /** 可灵视频拓展 */
+  KLING_VIDEO_EXTEND = 'KLING_VIDEO_EXTEND',
   /** Pika 生成视频 */
   PIKA = 'PIKA',
   /** MJ 图转视频 */
@ -19,6 +21,7 @@
   MJ_VIDEO_EXTEND = 'MJ_VIDEO_EXTEND'
 }
 export const MappingTaskTypeToVideoModel = (type: BookBackTaskType | string) => {
   switch (type) {
     case BookBackTaskType.LUMA_VIDEO:
@ -27,6 +30,8 @@ export const MappingTaskTypeToVideoModel = (type: BookBackTaskType | string) =>
       return ImageToVideoModels.RUNWAY
     case BookBackTaskType.KLING_VIDEO:
       return ImageToVideoModels.KLING
+    case BookBackTaskType.KLING_VIDEO_EXTEND:
+      return ImageToVideoModels.KLING_VIDEO_EXTEND
     case BookBackTaskType.MJ_VIDEO:
       return ImageToVideoModels.MJ_VIDEO
     case BookBackTaskType.MJ_VIDEO_EXTEND:
@ -103,6 +108,8 @@ export enum VideoModel {
 export enum VideoStatus {
   /** 等待 */
   WAIT = 'wait',
+  /** 提交成功 */
+  SUBMITTED = 'submitted',
   /** 处理中 */
   PROCESSING = 'processing',
   /** 完成 */
@ -149,6 +156,12 @@ export enum RunwaySeconds {
 //#region 可灵相关
+/**
+ *
+ * - std
+ * - pro
+ */
 export enum KlingMode {
   /** 高性能 */
   STD = 'std',
@ -156,6 +169,171 @@ export enum KlingMode {
PRO = 'pro' PRO = 'pro'
} }
/**
*
*
* @param mode
* @returns
*/
export function GetKlingModeLabel(mode: KlingMode | string) {
switch (mode) {
case KlingMode.STD:
return t('高性能 (std)')
case KlingMode.PRO:
return t('高表现 (pro)')
default:
return t('未知')
}
}
/**
*
*
* @returns UI组件
*/
export function GetKlingModeOptions() {
return [
{
label: GetKlingModeLabel(KlingMode.STD),
value: KlingMode.STD
},
{
label: GetKlingModeLabel(KlingMode.PRO),
value: KlingMode.PRO
}
]
}
/**
*
* - '5'5
* - '10'10
*/
export enum KlingDuration {
FIVE = 5,
TEN = 10
}
/**
*
*
* @param duration
* @returns
*/
export function GetKlingDurationLabel(duration: KlingDuration | string) {
switch (duration) {
case KlingDuration.FIVE:
return t('5秒')
case KlingDuration.TEN:
return t('10秒')
default:
return t('未知')
}
}
/**
*
*
* @returns UI组件
*/
export function GetKlingDurationOptions() {
return [
{
label: GetKlingDurationLabel(KlingDuration.FIVE),
value: KlingDuration.FIVE
},
{
label: GetKlingDurationLabel(KlingDuration.TEN),
value: KlingDuration.TEN
}
]
}
/**
*
* - kling-v1V1
* - kling-v1-5V1.5
* - kling-v1-6V1.6
* - kling-v2-masterV2 master
* - kling-v2-1V2.1
* - kling-v2-1-masterV2.1 master
*/
export enum KlingModelName {
/** V1 版本 */
KLING_V1 = 'kling-v1',
/** V1.5 版本 */
KLING_V1_5 = 'kling-v1-5',
/** V1.6 版本 */
KLING_V1_6 = 'kling-v1-6',
/** V2 master 版本 */
KLING_V2_MASTER = 'kling-v2-master',
/** V2.1 版本 */
KLING_V2_1 = 'kling-v2-1',
/** V2.1 master 版本 */
KLING_V2_1_MASTER = 'kling-v2-1-master'
}
/**
*
*
* @param modelName
* @returns
*/
export function GetKlingModelNameLabel(modelName: KlingModelName | string) {
switch (modelName) {
case KlingModelName.KLING_V1:
return t('Kling V1')
case KlingModelName.KLING_V1_5:
return t('Kling V1.5')
case KlingModelName.KLING_V1_6:
return t('Kling V1.6')
case KlingModelName.KLING_V2_MASTER:
return t('Kling V2 Master')
case KlingModelName.KLING_V2_1:
return t('Kling V2.1')
case KlingModelName.KLING_V2_1_MASTER:
return t('Kling V2.1 Master')
default:
return t('未知')
}
}
/**
*
*
* @returns UI组件
*/
export function GetKlingModelNameOptions() {
return [
{
label: GetKlingModelNameLabel(KlingModelName.KLING_V1),
value: KlingModelName.KLING_V1
},
{
label: GetKlingModelNameLabel(KlingModelName.KLING_V1_5),
value: KlingModelName.KLING_V1_5
},
{
label: GetKlingModelNameLabel(KlingModelName.KLING_V1_6),
value: KlingModelName.KLING_V1_6
},
{
label: GetKlingModelNameLabel(KlingModelName.KLING_V2_MASTER),
value: KlingModelName.KLING_V2_MASTER
},
{
label: GetKlingModelNameLabel(KlingModelName.KLING_V2_1),
value: KlingModelName.KLING_V2_1
},
{
label: GetKlingModelNameLabel(KlingModelName.KLING_V2_1_MASTER),
value: KlingModelName.KLING_V2_1_MASTER
}
]
}
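The new option helpers plug directly into select controls. A small usage sketch follows; the component wiring and variable names are illustrative, and the defaults mirror the ones used by bookVideoService in this commit:

```ts
import { ref } from 'vue'
import {
  GetKlingDurationOptions,
  GetKlingModeOptions,
  GetKlingModelNameOptions,
  KlingDuration,
  KlingMode,
  KlingModelName
} from '@/define/enum/video'

const modelName = ref<KlingModelName>(KlingModelName.KLING_V2_1)
const mode = ref<KlingMode>(KlingMode.STD)
const duration = ref<KlingDuration>(KlingDuration.FIVE)

// Each helper returns { label, value } pairs, ready for an <n-select :options="..."> binding.
const modelNameOptions = GetKlingModelNameOptions()
const modeOptions = GetKlingModeOptions()
const durationOptions = GetKlingDurationOptions()
```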
//#endregion //#endregion
//#region MJ Video //#region MJ Video

@ -162,7 +162,10 @@ const BOOK = {
   UPDATE_BOOK_TASK_DETAIL_VIDEO_MESSAGE: 'UPDATE_BOOK_TASK_DETAIL_VIDEO_MESSAGE',
   /** MJ VIDEO 图转视频返回前端数据任务 */
-  MJ_VIDEO_TO_VIDEO_RETURN: 'MJ_VIDEO_TO_VIDEO_RETURN'
+  MJ_VIDEO_TO_VIDEO_RETURN: 'MJ_VIDEO_TO_VIDEO_RETURN',
+  /** Kling 图转视频返回前端数据任务 */
+  KLING_IMAGE_TO_VIDEO_RETURN: 'KLING_IMAGE_TO_VIDEO_RETURN',
   //#endregion

@ -1,6 +1,8 @@
 import {
   ImageToVideoModels,
+  KlingDuration,
   KlingMode,
+  KlingModelName,
   MJVideoBatchSize,
   MJVideoType,
   RunawayModel,
@ -71,16 +73,59 @@ declare namespace BookTaskDetail {
   request_model?: string // 请求的模型,快速还是慢速
 }
+/**
+ * Kling
+ */
 type klingOptions = {
-  model?: string // 模型(kling-v1)
-  image: string // 图片地址必须支持Base64编码或图片URL支持.jpg / .jpeg / .png格式大小不能超过10MB分辨率不小于300*300px
-  image_tail?: string // 尾帧图片地址支持Base64编码或图片URL支持.jpg / .jpeg / .png格式大小不能超过10MB分辨率不小于300*300px
-  prompt?: string // 提示词,正向文本提示, 可选不能超过500个字符
-  negative_prompt?: string // 负面提示负向文本提示可选不能超过200个字符
-  cfg_scale?: number // 提示词相关性可选范围0-1
-  mode?: KlingMode // 生成视频的模式可选枚举值std高性能或 pro高表现
-  duration?: RunwaySeconds // 生成视频时长单位秒可选枚举值510包含尾帧的请求仅支持5秒
-  callback_url?: string // 回调地址,可选,生成视频完成后,会向该地址发送通知
+  /** 模型:kling-v1 / kling-v1-5 / kling-v1-6 / kling-v2-master / kling-v2-1 / kling-v2-1-master,默认 kling-v1 */
+  model_name?: KlingModelName;
+  /** 图片地址(必须),支持Base64编码或图片URL,.jpg / .jpeg / .png格式,不超过10MB,分辨率不小于300*300px */
+  image: string;
+  /** 尾帧图片地址(可选),支持Base64编码或图片URL,.jpg / .jpeg / .png格式,不超过10MB,分辨率不小于300*300px */
+  image_tail?: string;
+  /** 正向文本提示,可选,不能超过500个字符 */
+  prompt?: string;
+  /** 负向文本提示,可选,不能超过200个字符 */
+  negative_prompt?: string;
+  /** 提示词相关性,可选,范围 [0, 1] */
+  cfg_scale?: number;
+  /** 生成视频的模式:std(高性能)或 pro(高表现) */
+  mode?: KlingMode;
+  /** 生成视频时长,枚举值 '5' | '10'(秒),包含尾帧的请求仅支持5秒 */
+  duration?: KlingDuration;
+  /** 视频ID(视频延长时使用) */
+  video_id?: string;
+  /** 对应的任务ID(视频延长时使用) */
+  task_id?: string;
 }
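A filled-in example of the reshaped type; the image URL is a placeholder and the other defaults mirror those set in bookVideoService later in this commit:

```ts
const klingOptions: BookTaskDetail.klingOptions = {
  model_name: KlingModelName.KLING_V2_1,
  image: 'https://example.com/storyboard-frame.png', // placeholder URL
  image_tail: '',
  prompt: '',
  negative_prompt: '',
  cfg_scale: 0.5,
  mode: KlingMode.STD,
  duration: KlingDuration.FIVE,
  video_id: '' // only used for video extension tasks
}
```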
interface MjVideoOptions { interface MjVideoOptions {

@ -2,6 +2,7 @@ import { ImageToVideoCategory } from '@/define/data/imageData'
 import { ImageGenerateMode, MJRobotType, MJSpeed } from '../data/mjData'
 import { JianyingKeyFrameEnum } from '../enum/jianyingEnum'
 import { ImageToVideoModels } from '@/define/enum/video'
+import { APIProviderDataItem } from '../data/apiData'
 declare namespace SettingModal {
   //#region 基础设置
@ -194,6 +195,16 @@
   translationModel: string
 }
+/**
+ * AI 推理设置 + API 提供商(在 InferenceAISettings 的基础上附加 API 提供商配置)
+ */
+interface InferenceAISettingAndProvider extends InferenceAISettings {
+  /** API 提供商详细配置项 - 包含完整的 API 提供商信息 */
+  apiProviderItem: APIProviderDataItem
+}
 //#endregion
 //#region SD设置
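A minimal sketch of how the combined object could be assembled, assuming InferenceAISettings carries an apiProvider id (as used elsewhere in this commit); the project may already have its own helper for this:

```ts
import { apiDefineData } from '@/define/data/apiData'
import { t } from '@/i18n'

function attachProvider(settings: SettingModal.InferenceAISettings): SettingModal.InferenceAISettingAndProvider {
  // Assumed lookup key: the provider id stored on the inference settings.
  const apiProviderItem = apiDefineData.find((p) => p.value === settings.apiProvider)
  if (!apiProviderItem) {
    throw new Error(t('当前API提供商数据不存在请检查数据是否正确'))
  }
  return { ...settings, apiProviderItem }
}
```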


@ -58,6 +58,7 @@ export default {
"未知错误": 'Unknown Error', "未知错误": 'Unknown Error',
"未知类型": 'Unknown Type', "未知类型": 'Unknown Type',
"未知操作": 'Unknown Operation', "未知操作": 'Unknown Operation',
"未知状态": "Unknown Status",
"下载成功": 'Download Successful', "下载成功": 'Download Successful',
"下载失败": 'Download Failed', "下载失败": 'Download Failed',
"页面不存在": "Page Not Found", "页面不存在": "Page Not Found",
@ -244,9 +245,10 @@ export default {
"当前分镜数据的MJ图转视频参数为空或参数校验失败请检查": "Current storyboard data MJ image-to-video parameters are empty or validation failed, please check", "当前分镜数据的MJ图转视频参数为空或参数校验失败请检查": "Current storyboard data MJ image-to-video parameters are empty or validation failed, please check",
'当前Midjourney模式不支持视频生成功能请更换为MJ API或本地代理模式后重试': 'Current Midjourney mode does not support video generation, please switch to MJ API or local proxy mode and try again!', '当前Midjourney模式不支持视频生成功能请更换为MJ API或本地代理模式后重试': 'Current Midjourney mode does not support video generation, please switch to MJ API or local proxy mode and try again!',
'Midjourney图转视频任务执行失败失败信息如下{error}': 'Midjourney image-to-video task execution failed, error details: {error}', 'Midjourney图转视频任务执行失败失败信息如下{error}': 'Midjourney image-to-video task execution failed, error details: {error}',
"图转视频任务执行完成!": "Image-to-video task completed!",
'Midjourney图转视频任务执行完成。': 'Midjourney image-to-video task execution completed.', 'Midjourney图转视频任务执行完成。': 'Midjourney image-to-video task execution completed.',
'Midjourney图转视频任务执行中...': 'Midjourney image-to-video task executing...', 'Midjourney图转视频任务执行中...': 'Midjourney image-to-video task executing...',
'已成功提交Midjourney图转视频任务任务ID{taskId}': 'Successfully submitted Midjourney image-to-video task, Task ID: {taskId}', '已成功提交{type}图转视频任务任务ID{taskId}': 'Successfully submitted {type} image-to-video task, task ID: {taskId}',
"小说批次任务的分镜数据的转视频配置为空,请检查": "Video conversion configuration for storyboard data of novel batch task is empty, please check", "小说批次任务的分镜数据的转视频配置为空,请检查": "Video conversion configuration for storyboard data of novel batch task is empty, please check",
"分镜的图片没有全部出完,不能继续该操作!!": "Storyboard images are not all generated, cannot continue this operation!!", "分镜的图片没有全部出完,不能继续该操作!!": "Storyboard images are not all generated, cannot continue this operation!!",
"分镜 {name} 图片在本地未找到,不能继续该操作,请检查对应分镜的图片路径是否正确": "Storyboard {name} image not found locally, cannot continue this operation, please check if the corresponding storyboard image path is correct", "分镜 {name} 图片在本地未找到,不能继续该操作,请检查对应分镜的图片路径是否正确": "Storyboard {name} image not found locally, cannot continue this operation, please check if the corresponding storyboard image path is correct",
@ -311,8 +313,16 @@ export default {
'同步主图信息失败,{error}': 'Failed to sync main image information, {error}', '同步主图信息失败,{error}': 'Failed to sync main image information, {error}',
'该操作会将当前批次的所有分镜的提示词全部重置为空,此操作不可撤销,重置的数据不可恢复,是否继续?': 'This operation will reset all storyboards\' prompts in the current batch to empty. This action is irreversible, and reset data cannot be recovered. Do you want to continue?', '该操作会将当前批次的所有分镜的提示词全部重置为空,此操作不可撤销,重置的数据不可恢复,是否继续?': 'This operation will reset all storyboards\' prompts in the current batch to empty. This action is irreversible, and reset data cannot be recovered. Do you want to continue?',
'正在执行重置提示词任务,请稍等...': 'Resetting prompts, please wait...', '正在执行重置提示词任务,请稍等...': 'Resetting prompts, please wait...',
"重置提示词失败,{error}": "重置提示词失败,{error}", "重置提示词失败,{error}": "Reset prompts failed, {error}",
"重置提示词成功": "Reset prompts successfully", "重置提示词成功": "Reset prompts successfully",
"该操作会选择 TXT 文件进行导入提示词,\n\n提示词文件格式要求\n每行一个提示词顺序和当前分镜顺序一致\n如果某个分镜不需要导入提示词可以留空该行\n超出分镜的提示词会被删除不足则只导入文本中有的提示词数据\n\n是否继续": "This operation will select a TXT file to import prompts,\n\nPrompt file format requirements:\nOne prompt per line, in the same order as the current storyboard,\nIf a storyboard does not need a prompt, leave that line empty,\nPrompts exceeding the number of storyboards will be deleted, insufficient prompts will only import the available prompt data\n\nDo you want to continue?",
"导入提示词失败,{error}": "Import prompts failed, {error}",
"导入提示词成功": "Import prompts successfully",
"导入的提示词文件内容为空": "Imported prompt file content is empty",
'导入第 {line} 行提示词失败,{error}': 'Failed to import prompt on line {line}, {error}',
"正在批量应用当前设置...": "Batch applying current settings...",
"将当前转视频的基础设置批量应用到所有的分镜中": "Batch apply current video conversion basic settings to all storyboards",
"应用设置": "Apply Settings",
//#endregion //#endregion
//#region 出图 //#region 出图
@ -978,6 +988,7 @@ export default {
"AI处理文案成功": "AI content processing successful", "AI处理文案成功": "AI content processing successful",
"AI处理文案失败{error}": "AI content processing failed, {error}", "AI处理文案失败{error}": "AI content processing failed, {error}",
"检测系统当前的语言已修改,继续执行会刷新当前页面并应用当前语言,是否继续?": 'System language has been modified, continuing will refresh the current page and apply the current language, continue?', "检测系统当前的语言已修改,继续执行会刷新当前页面并应用当前语言,是否继续?": 'System language has been modified, continuing will refresh the current page and apply the current language, continue?',
"当前API提供商数据不存在请检查数据是否正确": "Current API provider data does not exist, please check if the data is correct",
//#endregion //#endregion
//#region 预设 //#region 预设
@ -1183,6 +1194,8 @@ export default {
'场景分析': 'Scene Analysis', '场景分析': 'Scene Analysis',
'推理所有提示词': 'Infer All Prompts', '推理所有提示词': 'Infer All Prompts',
'推理空白分镜提示词': 'Infer Blank Storyboard Prompts', '推理空白分镜提示词': 'Infer Blank Storyboard Prompts',
"重置所有提示词": "Reset All Prompts",
"导入提示词": "Import Prompts",
'生成所有图片': 'Generate All Images', '生成所有图片': 'Generate All Images',
'生成未生成图片分镜': 'Generate Un-generated Image Storyboards', '生成未生成图片分镜': 'Generate Un-generated Image Storyboards',
'生成失败图片分镜': 'Generate Failed Image Storyboards', '生成失败图片分镜': 'Generate Failed Image Storyboards',
@ -1545,6 +1558,10 @@ export default {
'7. 需要在外部手动选择需要的{type}数据时,请点击<strong>“{button}”</strong> 按钮进行导入到标签集中': '7. When you need to manually select the required {type} data externally, please click the <strong>"{button}"</strong> button to import it into the tag set', '7. 需要在外部手动选择需要的{type}数据时,请点击<strong>“{button}”</strong> 按钮进行导入到标签集中': '7. When you need to manually select the required {type} data externally, please click the <strong>"{button}"</strong> button to import it into the tag set',
'即将开始自动推理,该操作会将之前的 {type} 数据覆盖,是否继续?': 'About to start automatic inference. This operation will overwrite previous {type} data. Continue?', '即将开始自动推理,该操作会将之前的 {type} 数据覆盖,是否继续?': 'About to start automatic inference. This operation will overwrite previous {type} data. Continue?',
'正在推理,请稍等...': 'Inferring, please wait...', '正在推理,请稍等...': 'Inferring, please wait...',
"通用前/后缀": "General Prefix/Suffix",
"提示词前后缀设置": "Prompt Prefix/Suffix Settings",
"通用前缀": "General Prefix",
"通用后缀": "General Suffix",
//#endregion //#endregion
//#region 转视频 //#region 转视频
@ -1602,6 +1619,51 @@ export default {
"当前批次任务已经开启图转视频,是否直接跳转到图文转视频界面?": "Current batch task has enabled image-to-video conversion, do you want to jump directly to the image-to-video interface?", "当前批次任务已经开启图转视频,是否直接跳转到图文转视频界面?": "Current batch task has enabled image-to-video conversion, do you want to jump directly to the image-to-video interface?",
"正在跳转到图文转视频界面...": "Jumping to image-to-video interface...", "正在跳转到图文转视频界面...": "Jumping to image-to-video interface...",
"已取消跳转,你可以在转视频模块中查看该任务": "Jump cancelled, you can view the task in the video conversion module", "已取消跳转,你可以在转视频模块中查看该任务": "Jump cancelled, you can view the task in the video conversion module",
'是否将当前分镜的设置批量应用到其余所有分镜?\n\n同步的设置视频类型SD/HD运动变化Motion批次数据Batch视频原始Raw首尾循环Loop \n\n批量应用后其余分镜的上述基础设置会被替换为当前分镜的数据是否继续': 'Do you want to apply the current storyboard settings to all other storyboards in batch?\n\nSynchronized settings: Video Type (SD/HD), Motion, Batch Data, Raw Video, Loop\n\nAfter batch application, the above basic settings of other storyboards will be replaced with the current storyboard data. Continue?',
"批量应用当前设置失败,{error}": "Failed to batch apply current settings, {error}",
"批量应用当前设置成功!": "Successfully batch applied current settings!",
"配置验证失败": "Configuration validation failed",
"可灵图转视频任务失败,失败信息:{error}": "Kling image-to-video task failed, error details: {error}",
"Kling图转视频任务完成": "Kling image-to-video task completed!",
"未找到有效的API地址": "No valid API address found",
"请先配置AI推理的API密钥": "Please configure the API key for AI inference first",
"当前分镜数据的可灵图转视频参数为空或参数校验失败,请检查": "The Kling image-to-video parameters for the current storyboard data are empty or validation failed, please check",
"当前分镜数据的图片地址为空,请检查": "The image URL for the current storyboard data is empty, please check",
"视频ID数量与视频链接数量不匹配": "The number of video IDs does not match the number of video links",
"未找到有效的GPT API地址": "No valid GPT API address found",
"未知的视频生成方式,请检查": "Unknown video generation method, please check",
"当前分镜数据的可灵视频ID为空请检查": 'The Kling video ID for the current storyboard data is empty, please check',
"当前分镜数据的可灵任务ID为空请检查": "The Kling task ID for the current storyboard data is empty, please check",
"可灵视频延长任务完成!": "Kling video extension task completed!",
"可灵视频延长任务失败,失败信息:{error}": "Kling video extension task failed, error details: {error}",
"可灵图转视频任务执行中...": "Kling image-to-video task in progress...",
"可灵视频延长任务正在执行中...": "Kling video extension task in progress...",
"参考图像": "Reference Image",
"正向文本提示可选不能超过2500个字符": "Positive text prompt, optional, cannot exceed 2500 characters",
"负向文本提示可选不能超过2500个字符": "Negative text prompt, optional, cannot exceed 2500 characters",
"自由度": "Degree of Freedom",
'0-1之间': "Between 0-1",
"生成视频的自由度<br/><br/><strong>值越大,模型自由度越小</strong><br/>与用户输入的提示词相关性越强<br/><br/>取值范围:<strong>[0, 1]</strong>": "Degree of freedom for video generation<br/><br/><strong>The larger the value, the smaller the model's degree of freedom</strong><br/>The stronger the correlation with user input prompts<br/><br/>Range: <strong>[0, 1]</strong>",
"生成模式": "Generation Mode",
"生成视频的模式<br/><br/>枚举值stdpro<br/>其中std标准模式标准基础模式性价比高<br/>其中pro专家模式高品质高表现模式生成视频质量更佳": "Video generation mode<br/><br/>Enum values: std, pro<br/>Where std: Standard mode (standard), basic mode, cost-effective<br/>Where pro: Expert mode (high quality), high-performance mode, better video quality",
"视频时长": "Video Duration",
'生成视频时长单位s5秒或10秒': 'Video duration in seconds (5 or 10 seconds)',
"视频延长": "Video Extend",
"选择已有的视频任务作为延长的基础": "Select an existing video task as the basis for extension",
"视频ID": "Video ID",
"支持通过文本、图片和视频延长生成的视频的ID原视频不能超过3分钟": "Supports the ID of videos generated through text, image, and video extension (original video cannot exceed 3 minutes)",
"对应于视频ID的任务ID通常在选择视频后自动填充": "Corresponding task ID for the video ID, usually auto-filled after selecting a video",
"是否将当前分镜的设置批量应用到其余所有分镜?\n\n同步的设置模型名称Model Name生成模式Mode视频时长Duration自由度CFG Scale \n\n批量应用后其余分镜的上述基础设置会被替换为当前分镜的数据是否继续": "Do you want to apply the current storyboard settings to all other storyboards in batch?\n\nSynchronized settings: Model Name, Mode, Duration, CFG Scale\n\nAfter batch application, the above basic settings of other storyboards will be replaced with the current storyboard data. Continue?",
"请选择一个已有的视频任务作为延长的基础": "Please select an existing video task as the basis for extension",
"父任务选择成功视频ID已更新为: {videoId}": "Parent task selected successfully, video ID updated to: {videoId}",
"高性能 (std)": "High Performance (std)",
"高表现 (pro)": "High Performance (pro)",
"选择Video": "Select Video",
"<strong>必须</strong><br/><br/>• 支持格式:<strong>.jpg/.jpeg/.png</strong><br/>• 文件大小:<strong>不超过10MB</strong><br/>• 分辨率:<strong>不小于300*300px</strong><br/>• 宽高比:<strong>1:2.5 ~ 2.5:1</strong>之间": "<strong>必须</strong><br/><br/>• 支持格式:<strong>.jpg/.jpeg/.png</strong><br/>• 文件大小:<strong>不超过10MB</strong><br/>• 分辨率:<strong>不小于300*300px</strong><br/>• 宽高比:<strong>1:2.5 ~ 2.5:1</strong>之间",
"参考图像 - 尾帧控制": "参考图像 - 尾帧控制",
'<strong>可选</strong><br/><br/>• 支持格式:<strong>.jpg/.jpeg/.png</strong><br/>• 文件大小:<strong>不超过10MB</strong><br/>• 分辨率:<strong>不小于300*300px</strong><br/>• 宽高比:<strong>1:2.5 ~ 2.5:1</strong>之间': '<strong>可选</strong><br/><br/>• 支持格式:<strong>.jpg/.jpeg/.png</strong><br/>• 文件大小:<strong>不超过10MB</strong><br/>• 分辨率:<strong>不小于300*300px</strong><br/>• 宽高比:<strong>1:2.5 ~ 2.5:1</strong>之间',
//#endregion
//#region MJ //#region MJ
'基本信息': 'Basic Information', '基本信息': 'Basic Information',
@ -1721,7 +1783,6 @@ export default {
'视频生成时长为5秒但并非仅限于此。视频制作完成后您可以在当前界面为选定的适配进行延长': 'Video generation duration is 5 seconds, but not limited to this. After video production is completed, you can extend the selected adaptation in the current interface!', '视频生成时长为5秒但并非仅限于此。视频制作完成后您可以在当前界面为选定的适配进行延长': 'Video generation duration is 5 seconds, but not limited to this. After video production is completed, you can extend the selected adaptation in the current interface!',
'您可以随意将视频延长最多 4 次,每次延长 4 秒,直至达到 21 秒(即可用的最大长度)。': 'You can freely extend the video up to 4 times, each extension adding 4 seconds, until reaching 21 seconds (the maximum available length).', '您可以随意将视频延长最多 4 次,每次延长 4 秒,直至达到 21 秒(即可用的最大长度)。': 'You can freely extend the video up to 4 times, each extension adding 4 seconds, until reaching 21 seconds (the maximum available length).',
//#endregion
//#endregion //#endregion
//#region 文案处理 //#region 文案处理


@ -58,6 +58,7 @@ export default {
"未知错误": '未知错误', "未知错误": '未知错误',
"未知类型": '未知类型', "未知类型": '未知类型',
"未知操作": '未知操作', "未知操作": '未知操作',
"未知状态": "未知状态",
"下载成功": '下载成功', "下载成功": '下载成功',
"下载失败": '下载失败', "下载失败": '下载失败',
"页面不存在": "页面不存在", "页面不存在": "页面不存在",
@ -244,9 +245,10 @@ export default {
"当前分镜数据的MJ图转视频参数为空或参数校验失败请检查": "当前分镜数据的MJ图转视频参数为空或参数校验失败请检查", "当前分镜数据的MJ图转视频参数为空或参数校验失败请检查": "当前分镜数据的MJ图转视频参数为空或参数校验失败请检查",
'当前Midjourney模式不支持视频生成功能请更换为MJ API或本地代理模式后重试': '当前Midjourney模式不支持视频生成功能请更换为MJ API或本地代理模式后重试', '当前Midjourney模式不支持视频生成功能请更换为MJ API或本地代理模式后重试': '当前Midjourney模式不支持视频生成功能请更换为MJ API或本地代理模式后重试',
'Midjourney图转视频任务执行失败失败信息如下{error}': 'Midjourney图转视频任务执行失败失败信息如下{error}', 'Midjourney图转视频任务执行失败失败信息如下{error}': 'Midjourney图转视频任务执行失败失败信息如下{error}',
"图转视频任务执行完成!": "图转视频任务执行完成!",
'Midjourney图转视频任务执行完成。': 'Midjourney图转视频任务执行完成。', 'Midjourney图转视频任务执行完成。': 'Midjourney图转视频任务执行完成。',
'Midjourney图转视频任务执行中...': 'Midjourney图转视频任务执行中...', 'Midjourney图转视频任务执行中...': 'Midjourney图转视频任务执行中...',
'已成功提交Midjourney图转视频任务任务ID{taskId}': '已成功提交Midjourney图转视频任务任务ID{taskId}', '已成功提交{type}图转视频任务任务ID{taskId}': '已成功提交{type}图转视频任务任务ID{taskId}',
"小说批次任务的分镜数据的转视频配置为空,请检查": "小说批次任务的分镜数据的转视频配置为空,请检查", "小说批次任务的分镜数据的转视频配置为空,请检查": "小说批次任务的分镜数据的转视频配置为空,请检查",
"分镜的图片没有全部出完,不能继续该操作!!": "分镜的图片没有全部出完,不能继续该操作!!", "分镜的图片没有全部出完,不能继续该操作!!": "分镜的图片没有全部出完,不能继续该操作!!",
"分镜 {name} 图片在本地未找到,不能继续该操作,请检查对应分镜的图片路径是否正确": "分镜 {name} 图片在本地未找到,不能继续该操作,请检查对应分镜的图片路径是否正确", "分镜 {name} 图片在本地未找到,不能继续该操作,请检查对应分镜的图片路径是否正确": "分镜 {name} 图片在本地未找到,不能继续该操作,请检查对应分镜的图片路径是否正确",
@ -313,6 +315,14 @@ export default {
'正在执行重置提示词任务,请稍等...': '正在执行重置提示词任务,请稍等...', '正在执行重置提示词任务,请稍等...': '正在执行重置提示词任务,请稍等...',
"重置提示词失败,{error}": "重置提示词失败,{error}", "重置提示词失败,{error}": "重置提示词失败,{error}",
"重置提示词成功": "重置提示词成功", "重置提示词成功": "重置提示词成功",
"该操作会选择 TXT 文件进行导入提示词,\n\n提示词文件格式要求\n每行一个提示词顺序和当前分镜顺序一致\n如果某个分镜不需要导入提示词可以留空该行\n超出分镜的提示词会被删除不足则只导入文本中有的提示词数据\n\n是否继续": "该操作会选择 TXT 文件进行导入提示词,\n\n提示词文件格式要求\n每行一个提示词顺序和当前分镜顺序一致\n如果某个分镜不需要导入提示词可以留空该行\n超出分镜的提示词会被删除不足则只导入文本中有的提示词数据\n\n是否继续",
"导入提示词失败,{error}": "导入提示词失败,{error}",
"导入提示词成功": "导入提示词成功",
"导入的提示词文件内容为空": "导入的提示词文件内容为空",
'导入第 {line} 行提示词失败,{error}': '导入第 {line} 行提示词失败,{error}',
"正在批量应用当前设置...": "正在批量应用当前设置...",
"将当前转视频的基础设置批量应用到所有的分镜中": "将当前转视频的基础设置批量应用到所有的分镜中",
"应用设置": "应用设置",
//#endregion //#endregion
//#region 出图 //#region 出图
@ -978,6 +988,7 @@ export default {
"AI处理文案成功": "AI处理文案成功", "AI处理文案成功": "AI处理文案成功",
"AI处理文案失败{error}": "AI处理文案失败{error}", "AI处理文案失败{error}": "AI处理文案失败{error}",
"检测系统当前的语言已修改,继续执行会刷新当前页面并应用当前语言,是否继续?": "检测系统当前的语言已修改,继续执行会刷新当前页面并应用当前语言,是否继续?", "检测系统当前的语言已修改,继续执行会刷新当前页面并应用当前语言,是否继续?": "检测系统当前的语言已修改,继续执行会刷新当前页面并应用当前语言,是否继续?",
"当前API提供商数据不存在请检查数据是否正确": "当前API提供商数据不存在请检查数据是否正确",
//#endregion //#endregion
//#region 预设 //#region 预设
@ -1183,6 +1194,8 @@ export default {
'场景分析': '场景分析', '场景分析': '场景分析',
'推理所有提示词': '推理所有提示词', '推理所有提示词': '推理所有提示词',
'推理空白分镜提示词': '推理空白分镜提示词', '推理空白分镜提示词': '推理空白分镜提示词',
"重置所有提示词": "重置所有提示词",
"导入提示词": "导入提示词",
'生成所有图片': '生成所有图片', '生成所有图片': '生成所有图片',
'生成未生成图片分镜': '生成未生成图片分镜', '生成未生成图片分镜': '生成未生成图片分镜',
'生成失败图片分镜': '生成失败图片分镜', '生成失败图片分镜': '生成失败图片分镜',
@ -1545,6 +1558,10 @@ export default {
'7. 需要在外部手动选择需要的{type}数据时,请点击<strong>“{button}”</strong> 按钮进行导入到标签集中': '7. 需要在外部手动选择需要的{type}数据时,请点击<strong>“{button}”</strong> 按钮进行导入到标签集中', '7. 需要在外部手动选择需要的{type}数据时,请点击<strong>“{button}”</strong> 按钮进行导入到标签集中': '7. 需要在外部手动选择需要的{type}数据时,请点击<strong>“{button}”</strong> 按钮进行导入到标签集中',
'即将开始自动推理,该操作会将之前的 {type} 数据覆盖,是否继续?': '即将开始自动推理,该操作会将之前的场景数据覆盖,是否继续?', '即将开始自动推理,该操作会将之前的 {type} 数据覆盖,是否继续?': '即将开始自动推理,该操作会将之前的场景数据覆盖,是否继续?',
'正在推理,请稍等...': '正在推理,请稍等...', '正在推理,请稍等...': '正在推理,请稍等...',
"通用前/后缀" : "通用前/后缀",
"提示词前后缀设置" : "提示词前后缀设置",
"通用前缀" : "通用前缀",
"通用后缀" : "通用后缀",
//#endregion //#endregion
//#region 转视频 //#region 转视频
@ -1602,6 +1619,51 @@ export default {
"当前批次任务已经开启图转视频,是否直接跳转到图文转视频界面?": "当前批次任务已经开启图转视频,是否直接跳转到图文转视频界面?", "当前批次任务已经开启图转视频,是否直接跳转到图文转视频界面?": "当前批次任务已经开启图转视频,是否直接跳转到图文转视频界面?",
"正在跳转到图文转视频界面...": "正在跳转到图文转视频界面...", "正在跳转到图文转视频界面...": "正在跳转到图文转视频界面...",
"已取消跳转,你可以在转视频模块中查看该任务": "已取消跳转,你可以在转视频模块中查看该任务", "已取消跳转,你可以在转视频模块中查看该任务": "已取消跳转,你可以在转视频模块中查看该任务",
'是否将当前分镜的设置批量应用到其余所有分镜?\n\n同步的设置视频类型SD/HD运动变化Motion批次数据Batch视频原始Raw首尾循环Loop \n\n批量应用后其余分镜的上述基础设置会被替换为当前分镜的数据是否继续': '是否将当前分镜的设置批量应用到其余所有分镜?\n\n同步的设置视频类型SD/HD运动变化Motion批次数据Batch视频原始Raw首尾循环Loop \n\n批量应用后其余分镜的上述基础设置会被替换为当前分镜的数据是否继续',
"批量应用当前设置失败,{error}": "批量应用当前设置失败,{error}",
"批量应用当前设置成功!": "批量应用当前设置成功!",
"配置验证失败": "配置验证失败",
"可灵图转视频任务失败,失败信息:{error}": "可灵图转视频任务失败,失败信息:{error}",
"Kling图转视频任务完成": "Kling图转视频任务完成",
"未找到有效的API地址": "未找到有效的API地址",
"请先配置AI推理的API密钥": "请先配置AI推理的API密钥",
"当前分镜数据的可灵图转视频参数为空或参数校验失败,请检查": "当前分镜数据的可灵图转视频参数为空或参数校验失败,请检查",
"当前分镜数据的图片地址为空,请检查": "当前分镜数据的图片地址为空,请检查",
"视频ID数量与视频链接数量不匹配": "视频ID数量与视频链接数量不匹配",
"未找到有效的GPT API地址": "未找到有效的GPT API地址",
"未知的视频生成方式,请检查": "未知的视频生成方式,请检查",
"当前分镜数据的可灵视频ID为空请检查": '当前分镜数据的可灵视频ID为空请检查',
"当前分镜数据的可灵任务ID为空请检查": "当前分镜数据的可灵任务ID为空请检查",
"可灵视频延长任务完成!": "可灵视频延长任务完成!",
"可灵视频延长任务失败,失败信息:{error}": "可灵视频延长任务失败,失败信息:{error}",
"可灵图转视频任务执行中...": "可灵图转视频任务执行中...",
"可灵视频延长任务正在执行中...": "可灵视频延长任务正在执行中...",
"参考图像": "参考图像",
"正向文本提示可选不能超过2500个字符": "正向文本提示可选不能超过2500个字符",
"负向文本提示可选不能超过2500个字符": "负向文本提示可选不能超过2500个字符",
"自由度": "自由度",
'0-1之间': "0-1之间",
"生成视频的自由度<br/><br/><strong>值越大,模型自由度越小</strong><br/>与用户输入的提示词相关性越强<br/><br/>取值范围:<strong>[0, 1]</strong>": "生成视频的自由度<br/><br/><strong>值越大,模型自由度越小</strong><br/>与用户输入的提示词相关性越强<br/><br/>取值范围:<strong>[0, 1]</strong>",
"生成模式": "生成模式",
"生成视频的模式<br/><br/>枚举值stdpro<br/>其中std标准模式标准基础模式性价比高<br/>其中pro专家模式高品质高表现模式生成视频质量更佳": "生成视频的模式<br/><br/>枚举值stdpro<br/>其中std标准模式标准基础模式性价比高<br/>其中pro专家模式高品质高表现模式生成视频质量更佳",
"视频时长": "视频时长",
'生成视频时长单位s5秒或10秒': '生成视频时长单位s5秒或10秒',
"视频延长": "视频延长",
"选择已有的视频任务作为延长的基础": "选择已有的视频任务作为延长的基础",
"视频ID": "视频ID",
"支持通过文本、图片和视频延长生成的视频的ID原视频不能超过3分钟": "支持通过文本、图片和视频延长生成的视频的ID原视频不能超过3分钟",
"对应于视频ID的任务ID通常在选择视频后自动填充": "对应于视频ID的任务ID通常在选择视频后自动填充",
"是否将当前分镜的设置批量应用到其余所有分镜?\n\n同步的设置模型名称Model Name生成模式Mode视频时长Duration自由度CFG Scale \n\n批量应用后其余分镜的上述基础设置会被替换为当前分镜的数据是否继续": "是否将当前分镜的设置批量应用到其余所有分镜?\n\n同步的设置模型名称Model Name生成模式Mode视频时长Duration自由度CFG Scale \n\n批量应用后其余分镜的上述基础设置会被替换为当前分镜的数据是否继续",
"请选择一个已有的视频任务作为延长的基础": "请选择一个已有的视频任务作为延长的基础",
"父任务选择成功视频ID已更新为: {videoId}" : "父任务选择成功视频ID已更新为: {videoId}",
"高性能 (std)" : "高性能 (std)",
"高表现 (pro)" : "高表现 (pro)",
"选择Video" : "选择Video",
"<strong>必须</strong><br/><br/>• 支持格式:<strong>.jpg/.jpeg/.png</strong><br/>• 文件大小:<strong>不超过10MB</strong><br/>• 分辨率:<strong>不小于300*300px</strong><br/>• 宽高比:<strong>1:2.5 ~ 2.5:1</strong>之间": "<strong>必须</strong><br/><br/>• 支持格式:<strong>.jpg/.jpeg/.png</strong><br/>• 文件大小:<strong>不超过10MB</strong><br/>• 分辨率:<strong>不小于300*300px</strong><br/>• 宽高比:<strong>1:2.5 ~ 2.5:1</strong>之间",
"参考图像 - 尾帧控制": "参考图像 - 尾帧控制",
'<strong>可选</strong><br/><br/>• 支持格式:<strong>.jpg/.jpeg/.png</strong><br/>• 文件大小:<strong>不超过10MB</strong><br/>• 分辨率:<strong>不小于300*300px</strong><br/>• 宽高比:<strong>1:2.5 ~ 2.5:1</strong>之间': '<strong>可选</strong><br/><br/>• 支持格式:<strong>.jpg/.jpeg/.png</strong><br/>• 文件大小:<strong>不超过10MB</strong><br/>• 分辨率:<strong>不小于300*300px</strong><br/>• 宽高比:<strong>1:2.5 ~ 2.5:1</strong>之间',
//#endregion
//#region MJ //#region MJ
'基本信息': '基本信息', '基本信息': '基本信息',
@ -1721,7 +1783,6 @@ export default {
'视频生成时长为5秒但并非仅限于此。视频制作完成后您可以在当前界面为选定的适配进行延长': '视频生成时长为5秒但并非仅限于此。视频制作完成后您可以在当前界面为选定的适配进行延长', '视频生成时长为5秒但并非仅限于此。视频制作完成后您可以在当前界面为选定的适配进行延长': '视频生成时长为5秒但并非仅限于此。视频制作完成后您可以在当前界面为选定的适配进行延长',
'您可以随意将视频延长最多 4 次,每次延长 4 秒,直至达到 21 秒(即可用的最大长度)。': '您可以随意将视频延长最多 4 次,每次延长 4 秒,直至达到 21 秒(即可用的最大长度)。', '您可以随意将视频延长最多 4 次,每次延长 4 秒,直至达到 21 秒(即可用的最大长度)。': '您可以随意将视频延长最多 4 次,每次延长 4 秒,直至达到 21 秒(即可用的最大长度)。',
//#endregion
//#endregion //#endregion
//#region 文案处理 //#region 文案处理
@ -1730,7 +1791,7 @@ export default {
'【LaiTool】分镜大师-通用版(上下文-人物场景固定-类型推理)': '【LaiTool】分镜大师-通用版(上下文-人物场景固定-类型推理)', '【LaiTool】分镜大师-通用版(上下文-人物场景固定-类型推理)': '【LaiTool】分镜大师-通用版(上下文-人物场景固定-类型推理)',
'【LaiTool】分镜大师-全面版-AI增强上下文-人物场景固定-单帧)': '【LaiTool】分镜大师-全面版-AI增强上下文-人物场景固定-单帧)', '【LaiTool】分镜大师-全面版-AI增强上下文-人物场景固定-单帧)': '【LaiTool】分镜大师-全面版-AI增强上下文-人物场景固定-单帧)',
'【LaiTool】分镜大师-全能优化版(上下文-人物固定)': '【LaiTool】分镜大师-全能优化版(上下文-人物固定)', '【LaiTool】分镜大师-全能优化版(上下文-人物固定)': '【LaiTool】分镜大师-全能优化版(上下文-人物固定)',
"【LaiTool】分镜大师-MJ超精细化版上下文-人物场景固定)" : "【LaiTool】分镜大师-MJ超精细化版上下文-人物场景固定)", "【LaiTool】分镜大师-MJ超精细化版上下文-人物场景固定)": "【LaiTool】分镜大师-MJ超精细化版上下文-人物场景固定)",
'【LaiTool】分镜大师-SD英文版上下文-人物场景固定-SD-英文提示词)': '【LaiTool】分镜大师-SD英文版上下文-人物场景固定-SD-英文提示词)', '【LaiTool】分镜大师-SD英文版上下文-人物场景固定-SD-英文提示词)': '【LaiTool】分镜大师-SD英文版上下文-人物场景固定-SD-英文提示词)',
'【LaiTool】分镜大师-单帧分镜提示词(上下文-单帧-人物自动推理)': '【LaiTool】分镜大师-单帧分镜提示词(上下文-单帧-人物自动推理)', '【LaiTool】分镜大师-单帧分镜提示词(上下文-单帧-人物自动推理)': '【LaiTool】分镜大师-单帧分镜提示词(上下文-单帧-人物自动推理)',
"没有找到对应的AI选项请先检查配置": "没有找到对应的AI选项请先检查配置", "没有找到对应的AI选项请先检查配置": "没有找到对应的AI选项请先检查配置",

@ -10,6 +10,7 @@ import { RetryWithBackoff } from '@/define/Tools/common'
 import { Book } from '@/define/model/book/book'
 import { AiInferenceModelModel, GetAIPromptOptionByValue } from '@/define/data/aiData/aiData'
 import { t } from '@/i18n'
+import { GetAIPromptRequestBodyByValue } from '@/define/data/aiData/aiPrompt/bookStoryboardPrompt'
 /**
  * AI推理通用工具类
@ -278,7 +279,15 @@
   throw new Error(t('当前模式需要提前分析或者设置角色场景数据,请先分析角色/场景数据!'))
 }
-let requestBody = cloneDeep(selectInferenceModel.requestBody)
+let requestBody: OpenAIRequest.Request | null = null
+if (typeof selectInferenceModel.requestBody == 'string') {
+  requestBody = cloneDeep(GetAIPromptRequestBodyByValue(selectInferenceModel.requestBody as string))
+} else {
+  requestBody = cloneDeep(selectInferenceModel.requestBody as OpenAIRequest.Request)
+}
+// 通过 requestBody 获取实际的 requestBody
 if (requestBody == null) {
   throw new Error(t('未找到对应的分镜预设的请求数据,请检查'))
 }


@ -3,6 +3,18 @@ import { BookTaskService } from '@/define/db/service/book/bookTaskService'
import { OptionRealmService } from '@/define/db/service/optionService' import { OptionRealmService } from '@/define/db/service/optionService'
import { BookService } from '@/define/db/service/book/bookService' import { BookService } from '@/define/db/service/book/bookService'
import { TaskListService } from '@/define/db/service/book/taskListService' import { TaskListService } from '@/define/db/service/book/taskListService'
import { TaskModal } from '@/define/model/task'
import { Book } from '@/define/model/book/book'
import { getProjectPath } from '../../option/optionCommonService'
import path from 'path'
import { isEmpty } from 'lodash'
import axios from 'axios'
import { define } from '@/define/define'
import { CheckFolderExistsOrCreate, CopyFileOrFolder } from '@/define/Tools/file'
import { DownloadFile } from '@/define/Tools/common'
import { MappingTaskTypeToVideoModel } from '@/define/enum/video'
import { BookBackTaskType } from '@/define/enum/bookEnum'
import { t } from '@/i18n'
export class BookBasicHandle { export class BookBasicHandle {
bookTaskDetailService!: BookTaskDetailService bookTaskDetailService!: BookTaskDetailService
@ -34,10 +46,164 @@ export class BookBasicHandle {
} }
} }
/**
 * 检查各个 service 是否已初始化
 * @returns 是否已初始化完成
 */
CheckInit() {
if (this.bookTaskDetailService
&& this.bookTaskService
&& this.optionRealmService
&& this.bookService
&& this.taskListService) {
return true
}
return false
}
  /** 执行事务的方法 */
  async transaction(callback: (realm: any) => void) {
-   await this.InitBookBasicHandle()
+   this.CheckInit() || await this.InitBookBasicHandle()
    this.bookService.transaction(() => {
      callback(this.bookService.realm)
    })
  }
/**
*
*
* URL下载视频文件到本地
* MidJourney
* MJ官方CDN和可灵视频外
*
* @param {string[]} videoUrls - URL列表
* @param {TaskModal.Task} task -
* @param {Book.SelectBookTaskDetail} bookTaskDetail -
* @param {string} preffix -
*
* @returns {Promise<{outVideoPath: string, subVideoPath: string[]}>}
* - outVideoPath: 主输出视频的本地路径
* - subVideoPath: 所有子视频的路径信息数组JSON字符串格式
*
* @throws {Error}
* @throws {Error}
* @throws {Error}
*
* @example
* ```typescript
* const result = await this.DownloadVideoUrls(
* ['http://example.com/video1.mp4', 'http://example.com/video2.mp4'],
* task,
* bookTaskDetail,
* 'MJ'
* );
* console.log('主视频路径:', result.outVideoPath);
* console.log('所有视频路径:', result.subVideoPath);
* ```
*
* @description
*
* 1.
* 2. URL进行下载
* 3. 使
* 4.
* 5.
* 6.
*
* @note
* - MidJourney官方CDN (cdn.midjourney.com)
* - (KLING_VIDEO, KLING_VIDEO_EXTEND) 使
* - machineId配置
* -
*/
async DownloadVideoUrls(videoUrls: string[], task: TaskModal.Task, bookTaskDetail: Book.SelectBookTaskDetail, preffix: string, videoIds?: string[]): Promise<{ outVideoPath: string, subVideoPath: string[] }> {
this.CheckInit() || await this.InitBookBasicHandle()
if (videoIds != undefined && videoIds.length != videoUrls.length) {
throw new Error(t("视频ID数量与视频链接数量不匹配"))
}
let bookTask = await this.bookTaskService.GetBookTaskDataById(
bookTaskDetail.bookTaskId as string,
true
)
let tempVideoUrls = bookTaskDetail.subVideoPath || []
let newVideoUrls: string[] = []
let outVideoPath: string = ''
const project_path = await getProjectPath()
// 开始下载所有视频
for (let i = 0; i < videoUrls.length; i++) {
const videoUrl = videoUrls[i]
// 处理文件地址和下载
let videoPath = path.join(
bookTask.imageFolder as string,
`video/subVideo/${bookTaskDetail.name}/${new Date().getTime()}_${i}.mp4`
)
let remoteUrl = videoUrl
// 开始处理下载 mj 官方的图片不支持转存
if (global.machineId
&& !isEmpty(global.machineId)
&& !videoUrl.startsWith('https://cdn.midjourney.com')
&& task.type != BookBackTaskType.KLING_VIDEO
&& task.type != BookBackTaskType.KLING_VIDEO_EXTEND
) {
// 转存一下视频文件
// 获取当前url的文件名
let fileName = preffix + "_" + path.basename(videoUrl)
let transferRes = await axios.post(define.lms_url + `/lms/FileUpload/UrlUpload/${global.machineId}`, {
url: videoUrl,
fileName: fileName
})
if (transferRes.status == 200 && transferRes.data.code == 1) {
remoteUrl = transferRes.data.data.url
}
}
if (isEmpty(remoteUrl)) {
remoteUrl = videoUrl
}
await CheckFolderExistsOrCreate(path.dirname(videoPath))
await DownloadFile(remoteUrl, videoPath)
// 处理返回数据信息
// 开始修改信息
// 将信息添加到里面
let a = {
localPath: path.relative(project_path, videoPath),
remotePath: remoteUrl,
taskId: bookTaskDetail.videoMessage?.taskId,
videoId: videoIds != undefined && videoIds[i] ? videoIds[i] : "",
index: i,
type: MappingTaskTypeToVideoModel(task.type as string)
}
newVideoUrls.push(JSON.stringify(a))
if (i == 0) {
outVideoPath = path.join(
bookTask.imageFolder as string,
'video',
bookTaskDetail.name + path.extname(videoPath)
)
await CopyFileOrFolder(videoPath, outVideoPath as string)
}
}
// 开始处理数据
// 将原有的视频路径合并到新数组中
newVideoUrls.push(...tempVideoUrls)
await this.bookTaskDetailService.ModifyBookTaskDetailById(bookTaskDetail.id as string, {
subVideoPath: newVideoUrls,
generateVideoPath: outVideoPath != '' ? outVideoPath : ''
})
return {
outVideoPath: outVideoPath,
subVideoPath: newVideoUrls
}
}
}
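
Each entry that DownloadVideoUrls pushes into `subVideoPath` is a JSON string. A minimal sketch of reading them back follows; the `SubVideoEntry` interface and helper name are illustrative, while the field names come from the object built in the download loop above.

```typescript
// Types inferred from the object serialized in DownloadVideoUrls; the interface name is illustrative.
interface SubVideoEntry {
  localPath: string   // relative to the project folder
  remotePath: string
  taskId?: string
  videoId: string
  index: number
  type: string        // produced by MappingTaskTypeToVideoModel
}

// Parse the JSON strings stored on bookTaskDetail.subVideoPath back into objects.
function parseSubVideoEntries(subVideoPath: string[] = []): SubVideoEntry[] {
  return subVideoPath.map((item) => JSON.parse(item) as SubVideoEntry)
}
```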

View File

@ -3,7 +3,9 @@ import { BookBasicHandle } from './bookBasicHandle'
import { Book } from '@/define/model/book/book' import { Book } from '@/define/model/book/book'
import { import {
ImageToVideoModels, ImageToVideoModels,
KlingDuration,
KlingMode, KlingMode,
KlingModelName,
MJVideoBatchSize, MJVideoBatchSize,
MJVideoMotion, MJVideoMotion,
MJVideoType, MJVideoType,
@ -23,6 +25,7 @@ import { getProjectPath } from '../../option/optionCommonService'
import { TaskModal } from '@/define/model/task' import { TaskModal } from '@/define/model/task'
import { BookBackTaskStatus, BookBackTaskType, BookTaskStatus } from '@/define/enum/bookEnum' import { BookBackTaskStatus, BookBackTaskType, BookTaskStatus } from '@/define/enum/bookEnum'
import { VideoHandle } from '@/main/service/video/index' import { VideoHandle } from '@/main/service/video/index'
import { ResponseMessageType } from '@/define/enum/softwareEnum'
export class BookVideoServiceHandle extends BookBasicHandle { export class BookVideoServiceHandle extends BookBasicHandle {
constructor() { constructor() {
@ -190,7 +193,7 @@ export class BookVideoServiceHandle extends BookBasicHandle {
)
let gptUrl = GetApiDefineDataById(inferenceSetting.apiProvider)?.gpt_url
if (gptUrl == null || isEmpty(gptUrl)) {
  throw new Error(t('未找到有效的GPT API地址'))
}
// Apply the default settings next
@ -219,13 +222,15 @@ export class BookVideoServiceHandle extends BookBasicHandle {
} }
let klingOptions: BookTaskDetail.klingOptions = {
  model_name: KlingModelName.KLING_V2_1,
  image: outImage,
  image_tail: '',
  prompt: '',
  negative_prompt: '',
  mode: KlingMode.STD,
  duration: KlingDuration.FIVE,
  video_id: '',
  cfg_scale: 0.5
}
let mjVideoOptions: BookTaskDetail.MjVideoOptions = { let mjVideoOptions: BookTaskDetail.MjVideoOptions = {
@ -305,14 +310,33 @@ export class BookVideoServiceHandle extends BookBasicHandle {
try {
  // Dispatch to the matching handler based on the task type
  const videoHandle = new VideoHandle()
  let res;
  switch (task.type) {
    case BookBackTaskType.MJ_VIDEO:
      res = await videoHandle.MJImageToVideo(task)
      break
    case BookBackTaskType.MJ_VIDEO_EXTEND:
      res = await videoHandle.MJVideoExtendToVideo(task)
      break
    case BookBackTaskType.KLING_VIDEO:
      res = await videoHandle.KlingImageToVideo(task)
      break
    case BookBackTaskType.KLING_VIDEO_EXTEND:
      res = await videoHandle.KlingVideoExtend(task)
      break
    default:
      throw new Error(t('未知的视频生成方式,请检查'))
  }
  let newValue = await this.bookTaskDetailService.GetBookTaskDetailDataById(task.bookTaskDetailId as string, true)
  SendReturnMessage(
    {
      code: 1,
      id: task.bookTaskDetailId as string,
      message: t('图转视频任务执行完成。'),
      type: ResponseMessageType.VIDEO_SUCESS,
      data: JSON.stringify(newValue)
    }, task.messageName as string)
  return res;
} catch (error) {
  // Unified error handling
  let message = t("图转视频失败,失败信息如下:{error}", { error: (error as Error).message })

View File

@ -3,6 +3,7 @@ import { OptionKeyName } from '@/define/enum/option'
import { optionSerialization } from './optionSerialization' import { optionSerialization } from './optionSerialization'
import { SettingModal } from '@/define/model/setting' import { SettingModal } from '@/define/model/setting'
import { t } from '@/i18n' import { t } from '@/i18n'
import { GetApiDefineDataById } from '@/define/data/apiData'
/** /**
* *
@ -33,3 +34,40 @@ export async function getGeneralSetting() {
) as SettingModal.GeneralSettings ) as SettingModal.GeneralSettings
return generalSetting return generalSetting
} }
/**
 * Get the AI inference settings together with the resolved API provider.
 *
 * Reads the inference-related configuration from the option store and attaches the full
 * API provider definition, so callers can use both from a single object.
 *
 * @returns {Promise<SettingModal.InferenceAISettingAndProvider>} The inference settings merged with the API provider item
 * @throws {Error} When the configured API provider ID does not exist
 *
 * @example
 * ```typescript
 * const inferenceConfig = await getInferenceSetting();
 * console.log(inferenceConfig.inferenceModel);        // inference model name
 * console.log(inferenceConfig.apiProviderItem.name);  // API provider name
 * ```
 */
export async function getInferenceSetting(): Promise<SettingModal.InferenceAISettingAndProvider> {
const optionRealmService = await OptionRealmService.getInstance()
let res = optionRealmService.GetOptionByKey(OptionKeyName.InferenceAI.InferenceSetting)
let aiReasonSetting = optionSerialization<SettingModal.InferenceAISettings>(
res,
t('设置 -> 推理设置')
)
// Resolve the matching API provider definition
let apiProviderItem = GetApiDefineDataById(aiReasonSetting.apiProvider);
if (apiProviderItem == null) {
throw new Error(t('当前API提供商数据不存在请检查数据是否正确'))
}
let result = {
...aiReasonSetting,
apiProviderItem
}
return result
}
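
For reference, a minimal sketch of how the merged settings object is typically consumed; the wrapper name `submitKlingJob` and the `body` parameter are illustrative, while the endpoint, base URL and Bearer token usage mirror the Kling service added in this commit (the relative import path is an assumption).

```typescript
import axios from 'axios'
import { getInferenceSetting } from '../option/optionCommonService'

// Sketch only: `body` stands for a Kling image2video payload built elsewhere.
async function submitKlingJob(body: unknown) {
  const inference = await getInferenceSetting()
  return axios.post(
    inference.apiProviderItem.base_url + '/kling/v1/videos/image2video',
    body,
    {
      headers: {
        Authorization: `Bearer ${inference.apiToken}`,
        'Content-Type': 'application/json'
      }
    }
  )
}
```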

View File

@ -436,7 +436,8 @@ export class TaskManager {
// case BookBackTaskType.RUNWAY_VIDEO:
// case BookBackTaskType.LUMA_VIDEO:
case BookBackTaskType.KLING_VIDEO:
case BookBackTaskType.KLING_VIDEO_EXTEND:
case BookBackTaskType.MJ_VIDEO:
case BookBackTaskType.MJ_VIDEO_EXTEND:
  this.AddImageToVideo(task)

View File

@ -50,7 +50,7 @@ export class TranslateCommon {
let apiProvider = GetApiDefineDataById(aiSetting.apiProvider)
if (apiProvider.gpt_url == null || isEmpty(apiProvider.gpt_url)) {
  throw new Error(t('未找到有效的GPT API地址'))
}
this.translationBusiness = apiProvider.gpt_url
this.translationAppId = aiSetting.translationModel

View File

@ -1,11 +1,14 @@
import { TaskModal } from '@/define/model/task'
import { MJVideoService } from './mjVideo'
import { KlingVideoService } from './klingVideo'

export class VideoHandle {
  mjVideoService: MJVideoService
  klingVideoService: KlingVideoService
  // VideoHandle-specific methods can be added here
  constructor() {
    // the mixin decorator takes care of initialization
    this.mjVideoService = new MJVideoService()
    this.klingVideoService = new KlingVideoService()
  }

  /** MJ image-to-video: converts the given image into a video through the Midjourney API */
@ -17,4 +20,13 @@ export class VideoHandle {
MJVideoExtendToVideo(task: TaskModal.Task) {
  return this.mjVideoService.MJVideoExtendToVideo(task)
}

/** Core method that converts a static image into a video through the Kling AI model.
 *  Supports model selection, prompts, negative prompts, duration and other parameters. */
KlingImageToVideo(task: TaskModal.Task) {
  return this.klingVideoService.KlingImageToVideo(task)
}

/** Extends an existing Kling-generated video through the Kling video-extend API. */
KlingVideoExtend(task: TaskModal.Task) {
  return this.klingVideoService.KlingVideoExtend(task)
}
}
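
A minimal usage sketch of the dispatcher above; the wrapper function `runKlingExtend` is illustrative, and the import path matches the one used by bookVideoService.

```typescript
import { VideoHandle } from '@/main/service/video/index'
import { TaskModal } from '@/define/model/task'

// Illustrative wrapper, not part of the commit: one handle per run, delegating to KlingVideoService.
async function runKlingExtend(task: TaskModal.Task) {
  const handle = new VideoHandle()
  return handle.KlingVideoExtend(task)
}
```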

View File

@ -0,0 +1,748 @@
import { TaskModal } from "@/define/model/task";
import { BookBasicHandle } from "../book/subBookHandle/bookBasicHandle";
import { getInferenceSetting } from "../option/optionCommonService";
import { t } from "@/i18n";
import { cloneDeep, isEmpty } from "lodash";
import { SettingModal } from "@/define/model/setting";
import { ValidateJson } from "@/define/Tools/validate";
import { BookTaskDetail } from "@/define/model/book/bookTaskDetail";
import { KlingDuration, KlingMode, KlingModelName, VideoStatus } from "@/define/enum/video";
import { GetImageBase64 } from "@/define/Tools/image";
import axios from "axios";
import { SendReturnMessage, successMessage } from "@/public/generalTools";
import { ResponseMessageType } from "@/define/enum/softwareEnum";
import { GeneralResponse } from "@/define/model/generalResponse";
import { Book } from "@/define/model/book/book";
import { BookBackTaskStatus, BookBackTaskType, BookTaskStatus } from "@/define/enum/bookEnum";
export class KlingVideoService extends BookBasicHandle {
inferenceSetting!: SettingModal.InferenceAISettingAndProvider
constructor() {
super();
}
private async InitApiSetting() {
// 加载推理设置中的数据
const inferenceSetting = await getInferenceSetting();
this.inferenceSetting = inferenceSetting;
// 判断一些数据是不是存在
if (isEmpty(this.inferenceSetting.apiProviderItem.base_url)) {
throw new Error(t('未找到有效的API地址'));
}
if (this.inferenceSetting.apiToken == null || isEmpty(this.inferenceSetting.apiToken)) {
throw new Error(t('请先配置AI推理的API密钥'));
}
}
//#region KlingImageToVideo
/**
 * Kling image-to-video.
 *
 * Core method that turns a static image into a video through the Kling AI model:
 * it submits the task to the Kling API and then polls for the result.
 *
 * @param {TaskModal.Task} task - The background task carrying the storyboard information
 * @returns {Promise<GeneralResponse.SuccessItem>} Success payload once the task has finished
 *
 * @throws {Error} When no valid API base URL is configured
 * @throws {Error} When no API token is configured
 * @throws {Error} When the storyboard has no image URL
 * @throws {Error} When the Kling options are missing or invalid
 * @throws {Error} When the API request or the downstream processing fails
 *
 * @example
 * ```typescript
 * const klingService = new KlingVideoService();
 * await klingService.KlingImageToVideo(task);
 * ```
 */
async KlingImageToVideo(task: TaskModal.Task): Promise<GeneralResponse.SuccessItem> {
try {
// 初始化基础句柄和API设置
await this.InitBookBasicHandle();
await this.InitApiSetting();
let { klingOption, videoMessage, bookTaskDetail } = await this.GetKlingOptions(task.bookTaskDetailId as string)
let imageUrl = videoMessage.imageUrl?.trim() || klingOption.image?.trim()
if (isEmpty(imageUrl)) {
throw new Error(t("当前分镜数据的图片地址为空,请检查"))
}
let image_tail = klingOption.image_tail?.trim()
let model_name = klingOption.model_name?.trim() || KlingModelName.KLING_V2_1;
let prompt = klingOption.prompt?.trim();
let negative_prompt = klingOption.negative_prompt?.trim();
let cfg_scale = klingOption.cfg_scale || 0.5;
let mode = klingOption.mode?.trim() || KlingMode.STD;
let duration = klingOption.duration ?? KlingDuration.FIVE;
let body: BookTaskDetail.klingOptions = {
  model_name: model_name as KlingModelName,
  // Remote URLs are passed through as-is; local paths are converted to base64
  image: (imageUrl as string).startsWith('http')
    ? (imageUrl as string)
    : await GetImageBase64(imageUrl as string, true)
}
if (!isEmpty(image_tail)) {
if (image_tail?.startsWith('http')) {
body.image_tail = image_tail
} else {
body.image_tail = await GetImageBase64(image_tail as string, true)
}
}
if (!isEmpty(prompt)) {
body.prompt = prompt
}
if (!isEmpty(negative_prompt)) {
body.negative_prompt = negative_prompt
}
if (cfg_scale != null) {
body.cfg_scale = cfg_scale
}
if (mode != null && !isEmpty(mode)) {
body.mode = mode as KlingMode
}
if (duration != null) {
body.duration = duration
}
let url = this.inferenceSetting.apiProviderItem.base_url + '/kling/v1/videos/image2video'
let res = await axios.post(url, body, {
headers: {
Authorization: `Bearer ${this.inferenceSetting.apiToken}`,
'Content-Type': 'application/json'
}
})
console.log("Kling video res", res.data)
let resData = res.data
let taskId = resData.data.task_id
// 修改Task, 将数据写入
this.taskListService.UpdateBackTaskData(task.id as string, {
taskId: taskId as string,
taskMessage: JSON.stringify(resData)
})
// 修改videoMessage
videoMessage.taskId = taskId
videoMessage.status = VideoStatus.SUBMITTED
videoMessage.messageData = JSON.stringify(resData)
videoMessage.msg = ''
delete videoMessage.imageUrl // 不要修改原本的图片地址
this.bookTaskDetailService.UpdateBookTaskDetailVideoMessage(
task.bookTaskDetailId as string,
videoMessage
)
// 添加任务成功 返回前端任务事件
SendReturnMessage(
{
code: 1,
id: task.bookTaskDetailId as string,
message: t('已成功提交{type}图转视频任务任务ID{taskId}', { type: t("可灵"), taskId: taskId }),
type: ResponseMessageType.KLING_VIDEO,
data: JSON.stringify(videoMessage)
},
task.messageName as string
)
await this.FecthKlingImageToVideoResult(bookTaskDetail, task, taskId, false)
return successMessage(
  t('Kling图转视频任务完成'),
  'KlingVideoService_KlingImageToVideo'
)
} catch (error) {
throw new Error(t('可灵图转视频任务失败,失败信息:{error}', { error: (error as Error).message }));
}
}
//#endregion
//#region KlingVideoExtend
/**
 * Kling video extension.
 *
 * Uses the Kling AI model to extend an existing video with additional content.
 * Supports prompt, negative prompt and CFG scale parameters.
 *
 * @param {TaskModal.Task} task - The background task carrying the storyboard information
 * @returns {Promise<GeneralResponse.SuccessItem>} Success payload once the task has finished
 *
 * @throws {Error} When no valid API base URL is configured
 * @throws {Error} When no API token is configured
 * @throws {Error} When the source video ID is empty
 * @throws {Error} When the source task ID is empty
 * @throws {Error} When the API request or the downstream processing fails
 *
 * @description
 * Processing steps:
 * 1. Initialize the basic handle and the API settings
 * 2. Read and validate the Kling options (video ID, task ID, prompts, CFG scale)
 * 3. Build the request body from video_id and task_id plus the optional parameters
 * 4. Call the Kling video-extend API endpoint
 * 5. Persist the returned task ID on the back task and the storyboard video message
 * 6. Notify the renderer that the task was submitted
 * 7. Poll for the result until the task succeeds or fails
 *
 * @note
 * - The source video must come from a previous Kling task
 * - Both video_id and task_id are required
 * - The extension runs asynchronously; progress is reported through renderer messages
 *
 * @example
 * ```typescript
 * const klingService = new KlingVideoService();
 * const result = await klingService.KlingVideoExtend(task);
 * console.log('Video extension task submitted');
 * ```
 */
async KlingVideoExtend(task: TaskModal.Task): Promise<GeneralResponse.SuccessItem> {
try {
// 初始化基础句柄和API设置
await this.InitBookBasicHandle();
await this.InitApiSetting();
let { klingOption, videoMessage, bookTaskDetail } = await this.GetKlingOptions(task.bookTaskDetailId as string)
let video_id = klingOption.video_id?.trim()
if (isEmpty(video_id)) {
throw new Error(t("当前分镜数据的可灵视频ID为空请检查"))
}
let task_id = klingOption.task_id?.trim()
if (isEmpty(task_id)) {
throw new Error(t("当前分镜数据的可灵任务ID为空请检查"))
}
let prompt = klingOption.prompt?.trim();
let negative_prompt = klingOption.negative_prompt?.trim();
let cfg_scale = klingOption.cfg_scale || 0.5;
let body: any = {
video_id: video_id as string,
task_id: task_id as string
}
if (!isEmpty(prompt)) {
body.prompt = prompt
}
if (!isEmpty(negative_prompt)) {
body.negative_prompt = negative_prompt
}
if (cfg_scale != null) {
body.cfg_scale = cfg_scale
}
// 开始做请求
let url = this.inferenceSetting.apiProviderItem.base_url + '/kling/v1/videos/video-extend';
let res = await axios.post(url, body, {
headers: {
Authorization: `Bearer ${this.inferenceSetting.apiToken}`,
'Content-Type': 'application/json'
}
})
console.log("Kling video extend res", res.data)
let resData = res.data
let taskId = resData.data.task_id
// 修改Task, 将数据写入
this.taskListService.UpdateBackTaskData(task.id as string, {
taskId: taskId as string,
taskMessage: JSON.stringify(resData)
})
// 修改videoMessage
videoMessage.taskId = taskId
videoMessage.status = VideoStatus.SUBMITTED
videoMessage.messageData = JSON.stringify(resData)
videoMessage.msg = ''
delete videoMessage.imageUrl // 不要修改原本的图片地址
this.bookTaskDetailService.UpdateBookTaskDetailVideoMessage(
task.bookTaskDetailId as string,
videoMessage
)
// 添加任务成功 返回前端任务事件
SendReturnMessage(
{
code: 1,
id: task.bookTaskDetailId as string,
message: t('已成功提交{type}图转视频任务任务ID{taskId}', { type: t("可灵"), taskId: taskId }),
type: ResponseMessageType.KLING_VIDEO_EXTEND,
data: JSON.stringify(videoMessage)
},
task.messageName as string
)
await this.FecthKlingVideoExtendResult(bookTaskDetail, task, taskId, false)
return successMessage(
  t('可灵视频延长任务完成!'),
  'KlingVideoService_KlingVideoExtend'
)
} catch (error) {
throw new Error(t('可灵视频延长任务失败,失败信息:{error}', { error: (error as Error).message }));
}
}
//#endregion
//#region FecthKlingImageToVideoResult
/**
 * Poll the Kling image-to-video task until it finishes.
 *
 * Repeatedly queries the Kling API for the task status and dispatches to the matching
 * status handler until the task succeeds or fails.
 *
 * @param {Book.SelectBookTaskDetail} bookTaskDetail - The storyboard detail the task belongs to
 * @param {TaskModal.Task} task - The background task being tracked
 * @param {string} taskId - The Kling task ID returned on submission
 * @param {boolean} useTransfer - Whether to use the transfer service when downloading results (currently unused)
 *
 * @throws {Error} When the API returns an unknown task status
 * @throws {Error} When the status query itself fails
 *
 * @description
 * Polling loop:
 * 1. Build the query URL and send a GET request for the task status
 * 2. Read task_status from the response payload
 * 3. Dispatch by status:
 *    - "submitted" | "processing": call KlingTaskProcessing (intermediate state)
 *    - "succeed": call KlingTaskSuccessed and stop polling
 *    - "failed": call KlingTaskFailed and stop polling
 *    - anything else: throw an unknown-status error
 * 4. Wait 20 seconds between polls (implemented inside KlingTaskProcessing)
 * 5. Exit the loop on success or failure
 *
 * @note
 * - The loop only exits on success, failure or an unknown status
 * - The 20 second wait is implemented in KlingTaskProcessing
 * - Progress is pushed to the renderer on every state change
 *
 * @example
 * ```typescript
 * await this.FecthKlingImageToVideoResult(
 *   bookTaskDetail,
 *   task,
 *   'kling_task_123456',
 *   false
 * );
 * ```
 */
async FecthKlingImageToVideoResult(
bookTaskDetail: Book.SelectBookTaskDetail,
task: TaskModal.Task,
taskId: string,
useTransfer: boolean = false) {
console.log(useTransfer)
while (true) {
let fetchUrl = this.inferenceSetting.apiProviderItem.base_url + '/kling/v1/videos/image2video/' + taskId;
let res = await axios.get(fetchUrl, {
headers: {
Authorization: `Bearer ${this.inferenceSetting.apiToken}`,
'Content-Type': 'application/json'
}
});
let resData = res.data.data;
let status = resData.task_status;
if (status === "submitted" || status === "processing") {
await this.KlingTaskProcessing(bookTaskDetail, task, taskId, resData)
} else if (status === "succeed") {
await this.KlingTaskSuccessed(bookTaskDetail, task, taskId, resData)
break
} else if (status === "failed") {
// 任务失败
await this.KlingTaskFailed(bookTaskDetail, task, taskId, resData)
break
} else {
// 未知状态
throw new Error(t("未知状态"));
}
}
}
//#endregion
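// For reference (inferred from the status handlers in this class, not from official Kling
// API documentation shipped with this commit): the pollers expect a query response roughly
// shaped like
//   {
//     code: number,
//     data: {
//       task_id: string,
//       task_status: 'submitted' | 'processing' | 'succeed' | 'failed',
//       task_status_msg?: string,
//       task_result?: { videos: Array<{ id: string, url: string }> }
//     }
//   }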
//#region FecthKlingVideoExtendResult
/**
 * Poll the Kling video-extend task until it finishes.
 *
 * Same polling flow as FecthKlingImageToVideoResult, but queries the dedicated
 * video-extend endpoint.
 *
 * @param {Book.SelectBookTaskDetail} bookTaskDetail - The storyboard detail the task belongs to
 * @param {TaskModal.Task} task - The background task being tracked
 * @param {string} taskId - The Kling task ID returned on submission
 * @param {boolean} useTransfer - Whether to use the transfer service when downloading results (currently unused)
 *
 * @throws {Error} When the API returns an unknown task status
 * @throws {Error} When the status query itself fails
 *
 * @description
 * Polling loop:
 * 1. Build the query URL: /kling/v1/videos/video-extend/{taskId}
 * 2. Send a GET request for the task status
 * 3. Read task_status from the response payload
 * 4. Dispatch by status:
 *    - "submitted" | "processing": call KlingTaskProcessing (intermediate state)
 *    - "succeed": call KlingTaskSuccessed and stop polling
 *    - "failed": call KlingTaskFailed and stop polling
 *    - anything else: throw an unknown-status error
 * 5. Wait 20 seconds between polls (implemented inside KlingTaskProcessing)
 * 6. Exit the loop on success or failure
 *
 * @note
 * - The loop only exits on success, failure or an unknown status
 * - The 20 second wait is implemented in KlingTaskProcessing
 * - Uses the video-extend API path instead of the image2video path
 *
 * @example
 * ```typescript
 * await this.FecthKlingVideoExtendResult(
 *   bookTaskDetail,
 *   task,
 *   'kling_extend_task_123456',
 *   false
 * );
 * ```
 *
 * @see KlingTaskProcessing - intermediate state handler
 * @see KlingTaskSuccessed - success handler
 * @see KlingTaskFailed - failure handler
 */
async FecthKlingVideoExtendResult(
bookTaskDetail: Book.SelectBookTaskDetail,
task: TaskModal.Task,
taskId: string,
useTransfer: boolean = false
) {
console.log(useTransfer)
while (true) {
let fetchUrl = this.inferenceSetting.apiProviderItem.base_url + '/kling/v1/videos/video-extend/' + taskId;
let res = await axios.get(fetchUrl, {
headers: {
Authorization: `Bearer ${this.inferenceSetting.apiToken}`,
'Content-Type': 'application/json'
}
});
let resData = res.data.data;
let status = resData.task_status;
if (status === "submitted" || status === "processing") {
await this.KlingTaskProcessing(bookTaskDetail, task, taskId, resData)
} else if (status === "succeed") {
await this.KlingTaskSuccessed(bookTaskDetail, task, taskId, resData)
break
} else if (status === "failed") {
// 任务失败
await this.KlingTaskFailed(bookTaskDetail, task, taskId, resData)
break
} else {
// 未知状态
throw new Error(t("未知状态"));
}
}
}
//#endregion
//#region get kling option
/**
 * Read and parse the Kling options of a storyboard detail.
 *
 * Loads the storyboard detail, validates its video message and parses the stored
 * Kling options JSON string into a typed object.
 *
 * @param {string} bookTaskDetailId - ID of the storyboard detail to load
 *
 * @returns {Promise<{bookTaskDetail: Book.SelectBookTaskDetail, klingOption: BookTaskDetail.klingOptions, videoMessage: BookTaskDetail.VideoMessage}>}
 * - bookTaskDetail: the full storyboard detail record
 * - klingOption: the parsed Kling video options
 * - videoMessage: the complete video message object
 *
 * @throws {Error} When the storyboard detail cannot be found
 * @throws {Error} When the video message is empty
 * @throws {Error} When the stored Kling options are not valid JSON
 *
 * @description
 * Processing steps:
 * 1. Load the storyboard detail by ID from the database
 * 2. Validate that a video message exists
 * 3. Validate that the stored klingOptions string is valid JSON
 * 4. Parse the JSON string into a BookTaskDetail.klingOptions object
 * 5. Return the detail, the parsed options and the video message
 *
 * @note
 * - The options are stored as a JSON string on the video message
 * - Both KlingImageToVideo and KlingVideoExtend rely on this helper
 * - It provides the complete parameter base for the API calls
 *
 * @example
 * ```typescript
 * const { klingOption, videoMessage } = await klingService.GetKlingOptions(
 *   'task_detail_123456'
 * );
 * console.log('Model name:', klingOption.model_name);
 * console.log('Video status:', videoMessage.status);
 * ```
 */
async GetKlingOptions(bookTaskDetailId: string): Promise<{ bookTaskDetail: Book.SelectBookTaskDetail, klingOption: BookTaskDetail.klingOptions, videoMessage: BookTaskDetail.VideoMessage }> {
// 开始处理小说数据
let bookTaskDetail = await this.bookTaskDetailService.GetBookTaskDetailDataById(bookTaskDetailId, true);
// 获取视频配置信息
let videoMessage = bookTaskDetail.videoMessage
if (videoMessage == null || videoMessage == undefined) {
throw new Error(t("小说批次任务的分镜数据的转视频配置为空,请检查"))
}
// 获取 MJ Video 的options
let klingOptionsString = bookTaskDetail.videoMessage?.klingOptions as string
if (!ValidateJson(klingOptionsString)) {
throw new Error(t("当前分镜数据的可灵图转视频参数为空或参数校验失败,请检查"))
}
let klingOptions = JSON.parse(klingOptionsString) as BookTaskDetail.klingOptions
return { bookTaskDetail: bookTaskDetail, klingOption: klingOptions, videoMessage: videoMessage }
}
//#endregion
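// Illustrative example (the enum literal values are assumed, they may differ from the real
// KlingModelName/KlingMode/KlingDuration definitions) of the JSON string stored in
// videoMessage.klingOptions that GetKlingOptions parses:
//   {
//     "model_name": "kling-v2-1",
//     "image": "https://example.com/frame_001.png",
//     "image_tail": "",
//     "prompt": "",
//     "negative_prompt": "",
//     "mode": "std",
//     "duration": 5,
//     "video_id": "",
//     "cfg_scale": 0.5
//   }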
//#region Kling Task Status Handle
/**
 * Handle a failed Kling task.
 *
 * Writes the failure state back to the storyboard and the back task, notifies the
 * renderer and finally rethrows the failure message.
 *
 * @param {Book.SelectBookTaskDetail} bookTaskDetail - The storyboard detail the task belongs to
 * @param {TaskModal.Task} task - The background task being tracked
 * @param {string} taskId - The Kling task ID
 * @param {any} resData - The failure payload returned by the API
 *
 * @throws {Error} Always rethrows the task_status_msg reported by the API
 *
 * @description
 * Processing steps:
 * 1. Mark the video message as failed and attach the error message
 * 2. Persist the updated video message on the storyboard detail
 * 3. Update the back task with the task ID and the raw response
 * 4. Push a failure message to the renderer
 * 5. Throw the error so the caller's unified error handling runs
 */
async KlingTaskFailed(bookTaskDetail: Book.SelectBookTaskDetail,
task: TaskModal.Task,
taskId: string,
resData: any) {
// 修改小说分镜的 videoMessage
let videoMessage = cloneDeep(bookTaskDetail.videoMessage) ?? {}
videoMessage.status = VideoStatus.FAIL
videoMessage.msg = resData.task_status_msg
videoMessage.taskId = taskId
videoMessage.messageData = JSON.stringify(resData)
delete videoMessage.imageUrl
// 修改 videoMessage数据
this.bookTaskDetailService.UpdateBookTaskDetailVideoMessage(
bookTaskDetail.id as string,
videoMessage
)
// 修改TASK
this.taskListService.UpdateBackTaskData(task.id as string, {
taskId: taskId,
taskMessage: JSON.stringify(resData)
})
// 返回前端数据
SendReturnMessage(
{
code: 0,
id: bookTaskDetail.id as string,
message: task.type == BookBackTaskType.KLING_VIDEO_EXTEND ?
t("可灵视频延长任务失败,失败信息:{error}", {
error: resData.task_status_msg
}) :
t('可灵图转视频任务失败,失败信息:{error}', {
error: resData.task_status_msg
}),
type: task.type == BookBackTaskType.KLING_VIDEO_EXTEND ? ResponseMessageType.KLING_VIDEO_EXTEND : ResponseMessageType.KLING_VIDEO,
data: JSON.stringify(videoMessage)
},
task.messageName as string
)
throw new Error(resData.task_status_msg)
}
/**
 * Handle a successful Kling task.
 *
 * Stores the returned video URLs, updates the storyboard and back task status,
 * downloads the generated videos and notifies the renderer.
 *
 * @param {Book.SelectBookTaskDetail} bookTaskDetail - The storyboard detail the task belongs to
 * @param {TaskModal.Task} task - The background task being tracked
 * @param {string} taskId - The Kling task ID
 * @param {any} resData - The success payload returned by the API, including the video URLs
 *
 * @description
 * Processing steps:
 * 1. Mark the video message as successful
 * 2. Collect the returned video URLs and video IDs
 * 3. Persist the updated video message on the storyboard detail
 * 4. Mark the storyboard as IMAGE_TO_VIDEO_SUCCESS
 * 5. Mark the back task as DONE and store the raw response
 * 6. Download all returned videos and push a success message to the renderer
 *
 * @note
 * - Both image-to-video and video-extend tasks share this handler
 * - The success message type depends on the task type
 * - The Kling task ID is used as the prefix for the downloaded files
 */
async KlingTaskSuccessed(bookTaskDetail: Book.SelectBookTaskDetail,
task: TaskModal.Task,
taskId: string,
resData: any) {
// 任务成功 修改 videoMessage
let videoMessage = cloneDeep(bookTaskDetail.videoMessage) ?? {}
videoMessage.status = VideoStatus.SUCCESS
videoMessage.taskId = taskId
let klingVideoIds: string[] = []
if (resData.task_result && resData.task_result.videos && resData.task_result.videos.length > 0) {
videoMessage.videoUrls = []
resData.task_result.videos.forEach((item: any) => {
videoMessage.videoUrls?.push(item.url)
klingVideoIds.push(item.id);
})
}
videoMessage.messageData = JSON.stringify(resData)
delete videoMessage.imageUrl
this.bookTaskDetailService.UpdateBookTaskDetailVideoMessage(
task.bookTaskDetailId as string,
videoMessage
)
// 修改小说分镜状态
this.bookTaskDetailService.ModifyBookTaskDetailById(task.bookTaskDetailId as string, {
status: BookTaskStatus.IMAGE_TO_VIDEO_SUCCESS
})
// 修改任务状态
this.taskListService.UpdateBackTaskData(task.id as string, {
status: BookBackTaskStatus.DONE,
taskId: taskId,
taskMessage: JSON.stringify(resData)
})
let klingId = resData.task_id ?? new Date().getTime().toString()
// 下载 视频
await this.DownloadVideoUrls(videoMessage.videoUrls || [], task, bookTaskDetail, klingId, klingVideoIds)
SendReturnMessage(
{
code: 1,
id: bookTaskDetail.id as string,
message: task.type == BookBackTaskType.KLING_VIDEO_EXTEND ?
t('可灵视频延长任务完成!') : t('Kling图转视频任务完成'),
type: task.type == BookBackTaskType.KLING_VIDEO_EXTEND ? ResponseMessageType.KLING_VIDEO_EXTEND : ResponseMessageType.KLING_VIDEO,
data: JSON.stringify(videoMessage)
},
task.messageName as string
)
}
/**
 * Handle an in-progress Kling task.
 *
 * Persists the intermediate state, notifies the renderer and waits before the next
 * polling round.
 *
 * @param {Book.SelectBookTaskDetail} bookTaskDetail - The storyboard detail the task belongs to
 * @param {TaskModal.Task} task - The background task being tracked
 * @param {string} taskId - The Kling task ID
 * @param {any} resData - The in-progress payload returned by the API
 *
 * @description
 * Processing steps:
 * 1. Mark the video message as processing
 * 2. Attach the task ID and the raw response
 * 3. Persist the updated video message on the storyboard detail
 * 4. Push a progress message to the renderer
 * 5. Wait 20 seconds before the caller polls again
 *
 * @note
 * - The 20 second delay throttles the polling loop
 * - Runs on every poll while the task is still queued or running
 * - Covers both the "submitted" (queued) and "processing" (running) states
 */
async KlingTaskProcessing(
bookTaskDetail: Book.SelectBookTaskDetail,
task: TaskModal.Task,
taskId: string,
resData: any) {
// 任务执行中或者是提交成功
let videoMessage = cloneDeep(bookTaskDetail.videoMessage) ?? {}
videoMessage.status = VideoStatus.PROCESSING
videoMessage.taskId = taskId
videoMessage.messageData = JSON.stringify(resData)
delete videoMessage.imageUrl
this.bookTaskDetailService.UpdateBookTaskDetailVideoMessage(
task.bookTaskDetailId as string,
videoMessage
)
SendReturnMessage(
{
code: 1,
id: bookTaskDetail.id as string,
message: task.type == BookBackTaskType.KLING_VIDEO_EXTEND ?
t('可灵视频延长任务正在执行中...') : t('可灵图转视频任务执行中...'),
type: task.type == BookBackTaskType.KLING_VIDEO_EXTEND ? ResponseMessageType.KLING_VIDEO_EXTEND : ResponseMessageType.KLING_VIDEO,
data: JSON.stringify(videoMessage)
},
task.messageName as string
)
// 没有成功 等待二十秒后继续执行
await new Promise((resolve) => setTimeout(resolve, 20000))
}
//#endregion
}

View File

@ -152,7 +152,7 @@ export class MJVideoService extends MJApiService {
{ {
code: 1, code: 1,
id: task.bookTaskDetailId as string, id: task.bookTaskDetailId as string,
message: t('已成功提交Midjourney图转视频任务任务ID{taskId}', { taskId: id }), message: t('已成功提交{type}图转视频任务任务ID{taskId}', { type: 'Midjourney', taskId: id }),
type: ResponseMessageType.MJ_VIDEO, type: ResponseMessageType.MJ_VIDEO,
data: JSON.stringify(videoMessage) data: JSON.stringify(videoMessage)
}, },
@ -275,7 +275,7 @@ export class MJVideoService extends MJApiService {
{ {
code: 1, code: 1,
id: task.bookTaskDetailId as string, id: task.bookTaskDetailId as string,
message: t('已成功提交Midjourney图转视频任务任务ID{taskId}', { taskId: id }), message: t('已成功提交{type}图转视频任务任务ID{taskId}', { type: 'Midjourney Extend', taskId: id }),
type: ResponseMessageType.MJ_VIDEO, type: ResponseMessageType.MJ_VIDEO,
data: JSON.stringify(videoMessage) data: JSON.stringify(videoMessage)
}, },

View File

@ -21,6 +21,7 @@ declare module 'vue' {
ComfyUIAddWorkflow: typeof import('./src/components/Setting/ComfyUIAddWorkflow.vue')['default'] ComfyUIAddWorkflow: typeof import('./src/components/Setting/ComfyUIAddWorkflow.vue')['default']
ComfyUISetting: typeof import('./src/components/Setting/ComfyUISetting.vue')['default'] ComfyUISetting: typeof import('./src/components/Setting/ComfyUISetting.vue')['default']
CommonDialog: typeof import('./src/components/common/CommonDialog.vue')['default'] CommonDialog: typeof import('./src/components/common/CommonDialog.vue')['default']
ConfigOptionGroup: typeof import('./src/components/common/ConfigOptionGroup.vue')['default']
ContactDeveloper: typeof import('./src/components/SoftHome/ContactDeveloper.vue')['default'] ContactDeveloper: typeof import('./src/components/SoftHome/ContactDeveloper.vue')['default']
CopyWritingCategoryMenu: typeof import('./src/components/CopyWriting/CopyWritingCategoryMenu.vue')['default'] CopyWritingCategoryMenu: typeof import('./src/components/CopyWriting/CopyWritingCategoryMenu.vue')['default']
CopyWritingContent: typeof import('./src/components/CopyWriting/CopyWritingContent.vue')['default'] CopyWritingContent: typeof import('./src/components/CopyWriting/CopyWritingContent.vue')['default']
@ -34,6 +35,7 @@ declare module 'vue' {
DatatableGenerateImageAction: typeof import('./src/components/Original/BookTaskDetail/DatatableGenerateImageAction.vue')['default'] DatatableGenerateImageAction: typeof import('./src/components/Original/BookTaskDetail/DatatableGenerateImageAction.vue')['default']
DataTableGptPrompt: typeof import('./src/components/Original/BookTaskDetail/DataTableGptPrompt.vue')['default'] DataTableGptPrompt: typeof import('./src/components/Original/BookTaskDetail/DataTableGptPrompt.vue')['default']
DatatableHeaderCharacter: typeof import('./src/components/Original/BookTaskDetail/DatatableHeaderCharacter.vue')['default'] DatatableHeaderCharacter: typeof import('./src/components/Original/BookTaskDetail/DatatableHeaderCharacter.vue')['default']
DatatableHeaderGptPrompt: typeof import('./src/components/Original/BookTaskDetail/DatatableHeaderGptPrompt.vue')['default']
DatatableHeaderImage: typeof import('./src/components/Original/BookTaskDetail/DatatableHeaderImage.vue')['default'] DatatableHeaderImage: typeof import('./src/components/Original/BookTaskDetail/DatatableHeaderImage.vue')['default']
DisabledWrapper: typeof import('./src/components/common/DisabledWrapper.vue')['default'] DisabledWrapper: typeof import('./src/components/common/DisabledWrapper.vue')['default']
DocHelp: typeof import('./src/components/DocHelp.vue')['default'] DocHelp: typeof import('./src/components/DocHelp.vue')['default']
@ -50,6 +52,8 @@ declare module 'vue' {
InputDialogContent: typeof import('./src/components/common/InputDialogContent.vue')['default'] InputDialogContent: typeof import('./src/components/common/InputDialogContent.vue')['default']
JianyingGenerateInformation: typeof import('./src/components/Original/BookTaskDetail/JianyingGenerateInformation.vue')['default'] JianyingGenerateInformation: typeof import('./src/components/Original/BookTaskDetail/JianyingGenerateInformation.vue')['default']
JianyingKeyFrameSetting: typeof import('./src/components/Setting/JianyingKeyFrameSetting.vue')['default'] JianyingKeyFrameSetting: typeof import('./src/components/Setting/JianyingKeyFrameSetting.vue')['default']
KlingImageToVideoInfo: typeof import('./src/components/MediaToVideo/MediaToVideoInfo/MediaToVideoInfoKling/KlingImageToVideoInfo.vue')['default']
KlingVideoExtendInfo: typeof import('./src/components/MediaToVideo/MediaToVideoInfo/MediaToVideoInfoKling/KlingVideoExtendInfo.vue')['default']
LanguageSwitcher: typeof import('./src/components/common/LanguageSwitcher.vue')['default'] LanguageSwitcher: typeof import('./src/components/common/LanguageSwitcher.vue')['default']
LoadingComponent: typeof import('./src/components/common/LoadingComponent.vue')['default'] LoadingComponent: typeof import('./src/components/common/LoadingComponent.vue')['default']
ManageAISetting: typeof import('./src/components/CopyWriting/ManageAISetting.vue')['default'] ManageAISetting: typeof import('./src/components/CopyWriting/ManageAISetting.vue')['default']
@ -57,15 +61,16 @@ declare module 'vue' {
MediaToVideoInfoConfig: typeof import('./src/components/MediaToVideo/MediaToVideoInfo/MediaToVideoInfoConfig.vue')['default'] MediaToVideoInfoConfig: typeof import('./src/components/MediaToVideo/MediaToVideoInfo/MediaToVideoInfoConfig.vue')['default']
MediaToVideoInfoEmptyState: typeof import('./src/components/MediaToVideo/MediaToVideoInfo/MediaToVideoInfoEmptyState.vue')['default'] MediaToVideoInfoEmptyState: typeof import('./src/components/MediaToVideo/MediaToVideoInfo/MediaToVideoInfoEmptyState.vue')['default']
MediaToVideoInfoHome: typeof import('./src/components/MediaToVideo/MediaToVideoInfoHome.vue')['default'] MediaToVideoInfoHome: typeof import('./src/components/MediaToVideo/MediaToVideoInfoHome.vue')['default']
MediaToVideoInfoKlingVideoInfo: typeof import('./src/components/MediaToVideo/MediaToVideoInfo/MediaToVideoInfoKling/MediaToVideoInfoKlingVideoInfo.vue')['default']
MediaToVideoInfoMJVideoExtend: typeof import('./src/components/MediaToVideo/MediaToVideoInfo/MediaToVideoInfoMJVideo/MediaToVideoInfoMJVideoExtend.vue')['default'] MediaToVideoInfoMJVideoExtend: typeof import('./src/components/MediaToVideo/MediaToVideoInfo/MediaToVideoInfoMJVideo/MediaToVideoInfoMJVideoExtend.vue')['default']
MediaToVideoInfoMJVideoImageToVideo: typeof import('./src/components/MediaToVideo/MediaToVideoInfo/MediaToVideoInfoMJVideo/MediaToVideoInfoMJVideoImageToVideo.vue')['default'] MediaToVideoInfoMJVideoImageToVideo: typeof import('./src/components/MediaToVideo/MediaToVideoInfo/MediaToVideoInfoMJVideo/MediaToVideoInfoMJVideoImageToVideo.vue')['default']
MediaToVideoInfoMJVideoInfo: typeof import('./src/components/MediaToVideo/MediaToVideoInfo/MediaToVideoInfoMJVideo/MediaToVideoInfoMJVideoInfo.vue')['default'] MediaToVideoInfoMJVideoInfo: typeof import('./src/components/MediaToVideo/MediaToVideoInfo/MediaToVideoInfoMJVideo/MediaToVideoInfoMJVideoInfo.vue')['default']
MediaToVideoInfoMJVideoSelectParentTask: typeof import('./src/components/MediaToVideo/MediaToVideoInfo/MediaToVideoInfoMJVideo/MediaToVideoInfoMJVideoSelectParentTask.vue')['default']
MediaToVideoInfoTaskDetail: typeof import('./src/components/MediaToVideo/MediaToVideoInfo/MediaToVideoInfoTaskDetail.vue')['default'] MediaToVideoInfoTaskDetail: typeof import('./src/components/MediaToVideo/MediaToVideoInfo/MediaToVideoInfoTaskDetail.vue')['default']
MediaToVideoInfoTaskList: typeof import('./src/components/MediaToVideo/MediaToVideoInfo/MediaToVideoInfoTaskList.vue')['default'] MediaToVideoInfoTaskList: typeof import('./src/components/MediaToVideo/MediaToVideoInfo/MediaToVideoInfoTaskList.vue')['default']
MediaToVideoInfoTaskOptions: typeof import('./src/components/MediaToVideo/MediaToVideoInfo/MediaToVideoInfoTaskOptions.vue')['default'] MediaToVideoInfoTaskOptions: typeof import('./src/components/MediaToVideo/MediaToVideoInfo/MediaToVideoInfoTaskOptions.vue')['default']
MediaToVideoInfoVideoConfig: typeof import('./src/components/MediaToVideo/MediaToVideoInfo/MediaToVideoInfoVideoConfig.vue')['default'] MediaToVideoInfoVideoConfig: typeof import('./src/components/MediaToVideo/MediaToVideoInfo/MediaToVideoInfoVideoConfig.vue')['default']
MediaToVideoInfoVideoListInfo: typeof import('./src/components/MediaToVideo/MediaToVideoInfo/MediaToVideoInfoVideoListInfo.vue')['default'] MediaToVideoInfoVideoListInfo: typeof import('./src/components/MediaToVideo/MediaToVideoInfo/MediaToVideoInfoVideoListInfo.vue')['default']
MediaToVideoSelectParentTask: typeof import('./src/components/MediaToVideo/MediaToVideoInfo/MediaToVideoSelectParentTask.vue')['default']
MenuOpenRound: typeof import('./src/components/common/Icon/MenuOpenRound.vue')['default'] MenuOpenRound: typeof import('./src/components/common/Icon/MenuOpenRound.vue')['default']
MessageAndProgress: typeof import('./src/components/Original/BookTaskDetail/MessageAndProgress.vue')['default'] MessageAndProgress: typeof import('./src/components/Original/BookTaskDetail/MessageAndProgress.vue')['default']
MJAccountDialog: typeof import('./src/components/Setting/MJSetting/MJAccountDialog.vue')['default'] MJAccountDialog: typeof import('./src/components/Setting/MJSetting/MJAccountDialog.vue')['default']
@ -116,6 +121,8 @@ declare module 'vue' {
NSpace: typeof import('naive-ui')['NSpace'] NSpace: typeof import('naive-ui')['NSpace']
NSpin: typeof import('naive-ui')['NSpin'] NSpin: typeof import('naive-ui')['NSpin']
NSwitch: typeof import('naive-ui')['NSwitch'] NSwitch: typeof import('naive-ui')['NSwitch']
NTabPane: typeof import('naive-ui')['NTabPane']
NTabs: typeof import('naive-ui')['NTabs']
NTag: typeof import('naive-ui')['NTag'] NTag: typeof import('naive-ui')['NTag']
NText: typeof import('naive-ui')['NText'] NText: typeof import('naive-ui')['NText']
NTooltip: typeof import('naive-ui')['NTooltip'] NTooltip: typeof import('naive-ui')['NTooltip']
@ -132,6 +139,7 @@ declare module 'vue' {
OriginalTaskList: typeof import('./src/components/Original/MainHome/OriginalTaskList.vue')['default'] OriginalTaskList: typeof import('./src/components/Original/MainHome/OriginalTaskList.vue')['default']
OriginalViewBookInfo: typeof import('./src/components/Original/MainHome/OriginalViewBookInfo.vue')['default'] OriginalViewBookInfo: typeof import('./src/components/Original/MainHome/OriginalViewBookInfo.vue')['default']
OriginalViewBookTaskInfo: typeof import('./src/components/Original/MainHome/OriginalViewBookTaskInfo.vue')['default'] OriginalViewBookTaskInfo: typeof import('./src/components/Original/MainHome/OriginalViewBookTaskInfo.vue')['default']
PointRightIcon: typeof import('./src/components/common/Icon/PointRightIcon.vue')['default']
PresetShowCard: typeof import('./src/components/Preset/PresetShowCard.vue')['default'] PresetShowCard: typeof import('./src/components/Preset/PresetShowCard.vue')['default']
RouterLink: typeof import('vue-router')['RouterLink'] RouterLink: typeof import('vue-router')['RouterLink']
RouterView: typeof import('vue-router')['RouterView'] RouterView: typeof import('vue-router')['RouterView']

View File

@ -139,77 +139,7 @@
<!-- Kling 类型 --> <!-- Kling 类型 -->
<div v-else-if="selectedVideoType === 'KLING'" class="info-content"> <div v-else-if="selectedVideoType === 'KLING'" class="info-content">
<div class="info-section"> <MediaToVideoInfoKlingVideoInfo :task="task" />
<div class="info-item">
<div class="info-label">
<n-icon size="16">
<svg viewBox="0 0 24 24">
<path
fill="currentColor"
d="M12,2A10,10 0 0,0 2,12A10,10 0 0,0 12,22A10,10 0 0,0 22,12A10,10 0 0,0 12,2M11,16.5L6.5,12L7.91,10.59L11,13.67L16.59,8.09L18,9.5L11,16.5Z"
/>
</svg>
</n-icon>
<span>模型</span>
</div>
<div class="info-value">
<n-tag size="small" type="info">{{ klingOptions?.model || 'kling-v1' }}</n-tag>
</div>
</div>
<div class="info-item">
<div class="info-label">
<n-icon size="16">
<svg viewBox="0 0 24 24">
<path
fill="currentColor"
d="M12,2L13.09,8.26L22,9L13.09,9.74L12,16L10.91,9.74L2,9L10.91,8.26L12,2Z"
/>
</svg>
</n-icon>
<span>模式</span>
</div>
<div class="info-value">
<n-tag size="small" :type="klingOptions?.mode === 'pro' ? 'success' : 'info'">
{{ klingOptions?.mode === 'pro' ? '高表现' : '高性能' }}
</n-tag>
</div>
</div>
<div class="info-item">
<div class="info-label">
<n-icon size="16">
<svg viewBox="0 0 24 24">
<path
fill="currentColor"
d="M12,2A10,10 0 0,0 2,12A10,10 0 0,0 12,22A10,10 0 0,0 22,12A10,10 0 0,0 12,2M12,17A5,5 0 0,1 7,12A5,5 0 0,1 12,7A5,5 0 0,1 17,12A5,5 0 0,1 12,17Z"
/>
</svg>
</n-icon>
<span>时长</span>
</div>
<div class="info-value">
<n-text>{{ klingOptions?.duration || 5 }}</n-text>
</div>
</div>
<div class="info-item" v-if="klingOptions?.cfg_scale">
<div class="info-label">
<n-icon size="16">
<svg viewBox="0 0 24 24">
<path
fill="currentColor"
d="M19,3H5C3.89,3 3,3.89 3,5V19A2,2 0 0,0 5,21H19A2,2 0 0,0 21,19V5C21,3.89 20.1,3 19,3M19,5V19H5V5H19Z"
/>
</svg>
</n-icon>
<span>提示词相关性</span>
</div>
<div class="info-value">
<n-text>{{ klingOptions.cfg_scale }}</n-text>
</div>
</div>
</div>
</div> </div>
<!-- Midjourney 类型 --> <!-- Midjourney 类型 -->

View File

@ -0,0 +1,184 @@
<template>
<n-space vertical :size="20" style="width: 100%">
<ConfigOptionGroup
v-model:value="videoMessage.klingOptionsObject"
:options="klingOptions"
@change="handleConfigChange"
/>
<div style="display: flex; gap: 12px; width: 100%">
<TooltipButton
:tooltip="t('将当前转视频的基础设置批量应用到所有的分镜中')"
type="default"
size="small"
@click="handleBatchSettings"
style="width: 100px"
>
<template #icon>
<n-icon>
<svg viewBox="0 0 24 24">
<path fill="currentColor" d="M3,6V8H21V6H3M3,11V13H21V11H3M3,16V18H21V16H3Z" />
</svg>
</n-icon>
</template>
{{ t('应用设置') }}
</TooltipButton>
<n-button type="primary" size="small" @click="handleImageToVideo" style="flex: 1">
<template #icon>
<n-icon>
<svg viewBox="0 0 24 24">
<path fill="currentColor" d="M8,5.14V19.14L19,12.14L8,5.14Z" />
</svg>
</n-icon>
</template>
{{ t('生成视频') }}
</n-button>
</div>
</n-space>
</template>
<script setup>
import { computed } from 'vue'
import ConfigOptionGroup from '@/renderer/src/components/common/ConfigOptionGroup.vue'
import TooltipButton from '@/renderer/src/components/common/TooltipButton.vue'
import {
GetKlingDurationOptions,
GetKlingModelNameOptions,
GetKlingModeOptions
} from '@/define/enum/video'
import { t } from '@/i18n'
import { useFile } from '@/renderer/src/hooks/useFile'
const { UploadImageToLaiTool } = useFile()
const props = defineProps({
task: {
type: Object,
required: true
},
videoMessage: {
type: Object,
required: true
}
})
const emit = defineEmits(['update-kling-options', 'batch-settings', 'image-to-video'])
// Forward changes from the config option group to the parent component
const handleConfigChange = (key, value, newValue) => {
  emit('update-kling-options', key, value, newValue)
  console.log('Kling options changed:', key, value, newValue)
}

// Batch-apply the current basic settings to all storyboards
async function handleBatchSettings() {
  emit('batch-settings')
}

// Submit the Kling image-to-video task
async function handleImageToVideo() {
  emit('image-to-video')
}

// Upload a local image to the LaiTool image host and write the returned URL into the options
async function handleImageUpload(key, imagePath) {
  const url = await UploadImageToLaiTool(imagePath, 'video')
  if (url) {
    const newValue = {
      ...props.videoMessage.klingOptionsObject,
      [key]: url
    }
    // Let the parent persist the new image URL
    emit('update-kling-options', key, url, newValue)
  }
}
const klingOptions = computed(() => [
{
key: 'model_name',
label: t('模型名称'),
type: 'select',
options: GetKlingModelNameOptions(),
placeholder: t('请选择 {data}', { data: t('模型名称') }),
width: '180px'
},
{
key: 'image',
label: t('参考图像'),
type: 'image',
placeholder: t('请输入 {data}', { data: t('图片链接') }),
fullWidth: true, //
onUpload: handleImageUpload,
required: true,
tooltip: t(
'<strong>必须</strong><br/><br/>• 支持格式:<strong>.jpg/.jpeg/.png</strong><br/>• 文件大小:<strong>不超过10MB</strong><br/>• 分辨率:<strong>不小于300*300px</strong><br/>• 宽高比:<strong>1:2.5 ~ 2.5:1</strong>之间'
)
},
{
key: 'image_tail',
label: t('参考图像 - 尾帧控制'),
type: 'image',
fullWidth: true, //
onUpload: handleImageUpload,
required: false,
placeholder: t('请输入 {data}', { data: t('图片链接') }),
tooltip: t(
'<strong>可选</strong><br/><br/>• 支持格式:<strong>.jpg/.jpeg/.png</strong><br/>• 文件大小:<strong>不超过10MB</strong><br/>• 分辨率:<strong>不小于300*300px</strong><br/>• 宽高比:<strong>1:2.5 ~ 2.5:1</strong>之间'
)
},
{
key: 'prompt',
label: t('正向提示词'),
type: 'input',
inputType: 'textarea',
autosize: { minRows: 3, maxRows: 3 },
fullWidth: true, //
placeholder: t('请输入 {data}', { data: t('正向提示词') }),
tooltip: t('正向文本提示可选不能超过2500个字符')
},
{
key: 'negative_prompt',
label: t('反向提示词'),
type: 'input',
inputType: 'textarea',
autosize: { minRows: 3, maxRows: 3 },
fullWidth: true, //
placeholder: t('请输入 {data}', { data: t('反向提示词') }),
tooltip: t('负向文本提示可选不能超过2500个字符')
},
{
key: 'cfg_scale',
label: t('自由度'),
type: 'number',
min: 0,
max: 1,
step: 0.1,
placeholder: t('0-1之间'),
width: '120px',
tooltip: t(
'生成视频的自由度<br/><br/><strong>值越大,模型自由度越小</strong><br/>与用户输入的提示词相关性越强<br/><br/>取值范围:<strong>[0, 1]</strong>'
)
},
{
key: 'mode',
label: t('生成模式'),
type: 'select',
options: GetKlingModeOptions(),
width: '180px',
tooltip: t(
'生成视频的模式<br/><br/>枚举值stdpro<br/>其中std标准模式标准基础模式性价比高<br/>其中pro专家模式高品质高表现模式生成视频质量更佳'
)
},
{
key: 'duration',
label: t('视频时长'),
type: 'select',
options: GetKlingDurationOptions(),
width: '100px',
tooltip: t('生成视频时长单位s5秒或10秒')
}
])
</script>

View File

@ -0,0 +1,118 @@
<template>
<n-space vertical :size="20" style="width: 100%">
<ConfigOptionGroup
v-model:value="videoMessage.klingOptionsObject"
:options="klingOptions"
@change="handleConfigChange"
/>
<div style="display: flex; gap: 12px; width: 100%">
<n-button type="primary" size="small" @click="handleVideoExtend" style="flex: 1">
<template #icon>
<n-icon>
<svg viewBox="0 0 24 24">
<path fill="currentColor" d="M8,5.14V19.14L19,12.14L8,5.14Z" />
</svg>
</n-icon>
</template>
{{ t('视频延长') }}
</n-button>
</div>
</n-space>
</template>
<script setup>
import { computed } from 'vue'
import ConfigOptionGroup from '@/renderer/src/components/common/ConfigOptionGroup.vue'
import { t } from '@/i18n'
import PointRightIcon from '../../../common/Icon/PointRightIcon.vue'
const props = defineProps({
task: {
type: Object,
required: true
},
videoMessage: {
type: Object,
required: true
}
})
const emit = defineEmits([
'update-kling-options',
'batch-settings',
'video-extend',
'select-parent-task'
])
// Submit the Kling video-extend task
async function handleVideoExtend() {
  emit('video-extend')
}
// Kling video-extend option definitions rendered by ConfigOptionGroup
const klingOptions = computed(() => [
{
key: 'prompt',
label: t('正向提示词'),
type: 'input',
inputType: 'textarea',
autosize: { minRows: 2, maxRows: 4 },
placeholder: t('请输入 {data}', { data: t('正向提示词') }),
fullWidth: true //
},
{
key: 'negative_prompt',
label: t('反向提示词'),
type: 'input',
inputType: 'textarea',
autosize: { minRows: 2, maxRows: 3 },
placeholder: t('请输入 {data}', { data: t('反向提示词') }),
fullWidth: true, //
tooltip: t('负向文本提示可选不能超过2500个字符')
},
{
key: 'video_id',
label: t('视频ID'),
type: 'input',
suffixButton: {
icon: PointRightIcon,
tooltip: t('选择已有的视频任务作为延长的基础'),
text: t('选择Video'),
width: '120px',
onClick: () => {
emit('select-parent-task')
}
},
placeholder: t('请选择 {data}', { data: t('视频ID') }),
fullWidth: true, //
tooltip: t('支持通过文本、图片和视频延长生成的视频的ID原视频不能超过3分钟')
},
{
key: 'task_id',
label: t('任务ID'),
type: 'input',
placeholder: t('请选择 {data}', { data: t('任务ID') }),
fullWidth: true, //
tooltip: t('对应于视频ID的任务ID通常在选择视频后自动填充')
},
{
key: 'cfg_scale',
label: t('自由度'),
type: 'number',
min: 0,
max: 1,
step: 0.1,
placeholder: t('0-1之间'),
width: '120px',
tooltip: t(
'生成视频的自由度<br/><br/><strong>值越大,模型自由度越小</strong><br/>与用户输入的提示词相关性越强<br/><br/>取值范围:<strong>[0, 1]</strong>'
)
}
])
// Forward config changes to the parent component
function handleConfigChange(key, value, newValue) {
  emit('update-kling-options', key, value, newValue)
}
</script>

View File

@ -0,0 +1,288 @@
<template>
<div class="mj-video-container">
<n-tabs v-model:value="activeTab" type="segment" size="small">
<!-- 图生视频 Tab -->
<n-tab-pane name="image-to-video" :tab="t('图生视频')">
<KlingImageToVideoInfo
:task="props.task"
:video-message="videoMessage"
@update-kling-options="handleKlingOptionsUpdate"
@batch-settings="handleBatchSettings"
@image-to-video="handleImageToVideo(BookBackTaskType.KLING_VIDEO)"
/>
</n-tab-pane>
<!-- 视频拓展 Tab -->
<n-tab-pane name="video-extend" :tab="t('视频延长')">
<KlingVideoExtendInfo
:task="props.task"
:video-message="videoMessage"
@update-kling-options="handleKlingOptionsUpdate"
@batch-settings="handleBatchSettings"
@select-parent-task="handleSelectParentTask"
@video-extend="handleImageToVideo(BookBackTaskType.KLING_VIDEO_EXTEND)"
/>
</n-tab-pane>
</n-tabs>
<!-- 选择父任务的 Modal 弹窗 -->
<n-modal
v-model:show="showParentTaskModal"
:mask-closable="false"
preset="card"
style="width: 90%; max-width: 1200px"
:title="t('选择父任务')"
size="huge"
:content-style="{ padding: '8px 16px' }"
:segmented="true"
>
<ImageToVideoSelectParentTask
:taskData="props.task"
:videoList="subVideoPathObject"
@save-selection="handleParentTaskSelection"
/>
</n-modal>
</div>
</template>
<script setup>
import { ref, computed, h } from 'vue'
import { useMessage, useDialog } from 'naive-ui'
import KlingImageToVideoInfo from './KlingImageToVideoInfo.vue'
import KlingVideoExtendInfo from './KlingVideoExtendInfo.vue'
import { t } from '@/i18n'
import { ValidateJsonAndParse } from '@/define/Tools/validate'
import { KlingDuration, KlingMode, KlingModelName } from '@/define/enum/video'
import { useSoftwareStore, useBookStore } from '@/renderer/src/stores'
import ImageToVideoSelectParentTask from '@/renderer/src/components/MediaToVideo/MediaToVideoInfo/MediaToVideoSelectParentTask.vue'
import { ImageToVideoModels } from '@/define/enum/video'
import { isEmpty } from 'lodash'
import { BookBackTaskType, TaskExecuteType } from '@/define/enum/bookEnum'
import { DEFINE_STRING } from '@/define/ipcDefineString'
import { AddOneTask } from '@/renderer/src/common/task'
const message = useMessage()
const dialog = useDialog()
const softwareStore = useSoftwareStore()
const bookStore = useBookStore()
const showParentTaskModal = ref(false)
// Only keep sub-videos generated by Kling that have a local file
const subVideoPathObject = computed(() => {
return props.task?.subVideoPathObject.filter(
(video) =>
!isEmpty(video.localPath) &&
(video.type == ImageToVideoModels.KLING || video.type == ImageToVideoModels.KLING_VIDEO_EXTEND)
)
})
const props = defineProps({
task: {
type: Object,
required: true
}
})
// Normalize the video message and expose a parsed klingOptionsObject
const videoMessage = computed(() => {
console.log('MediaToVideoInfoKlingVideoInfo props.task', props.task, props.task?.videoMessage)
let videoMessage = props.task?.videoMessage || {}
let klingVideoOptionsString = videoMessage.klingOptions || '{}'
let klingVideoOptions = ValidateJsonAndParse(klingVideoOptionsString)
// klingOptions
const cleanKlingVideoOptions = {
model_name: klingVideoOptions.model_name || KlingModelName.KLING_V2_1,
image: videoMessage.imageUrl || klingVideoOptions.image || '',
image_tail: klingVideoOptions.image_tail || '',
prompt: klingVideoOptions.prompt || '',
negative_prompt: klingVideoOptions.negative_prompt || '',
cfg_scale: klingVideoOptions.cfg_scale ?? 0.5,
mode: klingVideoOptions.mode || KlingMode.STD,
duration: klingVideoOptions.duration ?? KlingDuration.FIVE,
video_id: klingVideoOptions.video_id || '',
task_id: klingVideoOptions.task_id || ''
}
videoMessage.klingOptionsObject = cleanKlingVideoOptions
console.log(
'MediaToVideoInfoKlingVideoInfo videoMessage',
videoMessage,
videoMessage.klingOptionsObject
)
return videoMessage
})
// Persist Kling option changes and keep the local task object in sync
async function handleKlingOptionsUpdate(key, value, newOptions) {
  // Serialize the options and mirror the selected image onto the video message
  let updateData = {
    klingOptions: JSON.stringify(newOptions),
    imageUrl: newOptions.image // keep imageUrl aligned with the chosen image
  }
  let res = await window.book.video.UpdateBookTaskDetailVideoMessage(props.task.id, updateData)
  if (res.code !== 1) {
    message.error(
      t('保存失败:{error}', {
        error: res.message
      }) + `, Key: ${key}`
    )
    return
  }
  // Update the local copy so the change survives tab switches
  if (!props.task.videoMessage) {
    props.task.videoMessage = {}
  }
  Object.assign(props.task.videoMessage, updateData)
  // Refresh the parsed klingOptionsObject as well
  videoMessage.value.klingOptionsObject = {
    ...videoMessage.value.klingOptionsObject,
    ...newOptions
  }
}
// Batch-apply the current Kling basic settings to all storyboards
async function handleBatchSettings() {
//
//
let da = dialog.warning({
title: t('操作确认'),
content: () =>
h(
'div',
{
style: {
whiteSpace: 'pre-line'
}
},
{
default: () =>
t(
'是否将当前分镜的设置批量应用到其余所有分镜?\n\n同步的设置模型名称Model Name生成模式Mode视频时长Duration自由度CFG Scale \n\n批量应用后其余分镜的上述基础设置会被替换为当前分镜的数据是否继续'
)
}
),
positiveText: t('确认'),
negativeText: t('取消'),
closable: true,
onPositiveClick: async () => {
da.destroy()
try {
softwareStore.spin.spinning = true
softwareStore.spin.tip = t('正在批量应用当前设置...')
//
for (let i = 0; i < bookStore.selectBookTaskDetail.length; i++) {
const element = bookStore.selectBookTaskDetail[i]
let updateObject = {}
//
let elementVideoMessage = element?.videoMessage || {}
let elementKlingOptionsString = elementVideoMessage.klingOptions || '{}'
let elementKlingOptions = ValidateJsonAndParse(elementKlingOptionsString)
// Kling
elementKlingOptions.model_name =
videoMessage.value.klingOptionsObject.model_name || KlingModelName.KLING_V2_1
elementKlingOptions.mode = videoMessage.value.klingOptionsObject.mode || KlingMode.STD
elementKlingOptions.duration =
videoMessage.value.klingOptionsObject.duration ?? KlingDuration.FIVE
elementKlingOptions.cfg_scale = videoMessage.value.klingOptionsObject.cfg_scale ?? 0.5
elementVideoMessage.klingOptions = JSON.stringify(elementKlingOptions)
updateObject.klingOptions = elementVideoMessage.klingOptions
//
let res = await window.book.video.UpdateBookTaskDetailVideoMessage(
element.id,
updateObject
)
if (res.code !== 1) {
message.error(
t('批量应用当前设置失败,{error}', {
error: res.message
})
)
return
}
}
message.success(t('批量应用当前设置成功!'))
} catch (error) {
message.error(
t('批量应用当前设置失败,{error}', {
error: error.message
})
)
} finally {
softwareStore.spin.spinning = false
}
},
onNegativeClick: () => {
da.destroy()
message.info(t('取消操作'))
}
})
}
// Create the Kling image-to-video / video-extend back task
async function handleImageToVideo(type) {
if (isEmpty(videoMessage.value.imageUrl)) {
message.error(t('请输入图片链接'))
return
}
let res = await AddOneTask({
bookId: props.task.bookId,
type: type,
executeType: TaskExecuteType.AUTO,
bookTaskId: props.task.bookTaskId,
bookTaskDetailId: props.task.id,
messageName: DEFINE_STRING.BOOK.KLING_IMAGE_TO_VIDEO_RETURN
})
if (res.code != 1) {
message.error(res.message)
return
}
message.success(res.message)
}
async function handleSelectParentTask() {
showParentTaskModal.value = true
message.info(t('请选择一个已有的视频任务作为延长的基础'))
}
// Apply the parent task chosen in the modal: write its video ID and task ID into the Kling options
async function handleParentTaskSelection(selectedVideoInfo) {
  try {
    // Merge the selected video ID and task ID into the current Kling options
    const currentKlingOptions = videoMessage.value.klingOptionsObject
    const updatedKlingOptions = {
      ...currentKlingOptions,
      video_id: selectedVideoInfo.videoId,
      task_id: selectedVideoInfo.taskId
    }
    // Persist the update through the shared handler
    await handleKlingOptionsUpdate('video_id', selectedVideoInfo.taskId, updatedKlingOptions)
    // Close the modal
showParentTaskModal.value = false
message.success(
t('父任务选择成功视频ID已更新为: {videoId}', { videoId: selectedVideoInfo.taskId })
)
} catch (error) {
message.error(t('保存失败:{error}', { error: error.message }))
}
}
const activeTab = ref('image-to-video')
</script>

View File

@ -443,16 +443,43 @@
<!-- 执行按钮 --> <!-- 执行按钮 -->
<n-form-item> <n-form-item>
<n-button type="primary" size="small" :loading="loading" @click="handleExtend" block> <div style="display: flex; gap: 12px; width: 100%">
<template #icon> <TooltipButton
<n-icon> :tooltip="t('将当前转视频的基础设置批量应用到所有的分镜中')"
<svg viewBox="0 0 24 24"> type="default"
<path fill="currentColor" d="M8,5.14V19.14L19,12.14L8,5.14Z" /> size="small"
</svg> :loading="loading"
</n-icon> @click="handleBatchSettings"
</template> style="width: 100px"
{{ t('执行视频拓展') }} >
</n-button> <template #icon>
<n-icon>
<svg viewBox="0 0 24 24">
<path fill="currentColor" d="M3,6V8H21V6H3M3,11V13H21V11H3M3,16V18H21V16H3Z" />
</svg>
</n-icon>
</template>
{{ t('应用设置') }}
</TooltipButton>
<n-button
type="primary"
size="small"
:loading="loading"
@click="handleExtend"
block
style="flex: 1"
>
<template #icon>
<n-icon>
<svg viewBox="0 0 24 24">
<path fill="currentColor" d="M8,5.14V19.14L19,12.14L8,5.14Z" />
</svg>
</n-icon>
</template>
{{ t('执行视频拓展') }}
</n-button>
</div>
</n-form-item> </n-form-item>
</n-space> </n-space>
</template> </template>
@ -469,8 +496,6 @@ import {
NTooltip, NTooltip,
NSwitch, NSwitch,
NAlert, NAlert,
NImage,
NText,
useMessage useMessage
} from 'naive-ui' } from 'naive-ui'
@ -511,13 +536,24 @@ const props = defineProps({
}) })
// emits // emits
const emit = defineEmits(['video-message-change', 'extend', 'select-parent-task', 'image-upload']) const emit = defineEmits([
'video-message-change',
'extend',
'select-parent-task',
'image-upload',
'batch-settings'
])
// videoMessage // videoMessage
function handleVideoMessageChange(key, value = undefined) { function handleVideoMessageChange(key, value = undefined) {
emit('video-message-change', key, value) emit('video-message-change', key, value)
} }
// Batch-apply the current video-extend settings to all storyboards
async function handleBatchSettings() {
emit('batch-settings', 'extend')
}
// //
async function handleExtend() { async function handleExtend() {
console.log('执行视频拓展', props.videoMessage, props.task) console.log('执行视频拓展', props.videoMessage, props.task)

View File

@ -8,7 +8,6 @@
:placeholder="t('请输入图片链接')" :placeholder="t('请输入图片链接')"
@change="handleVideoMessageChange('imageUrl')" @change="handleVideoMessageChange('imageUrl')"
size="small" size="small"
:disabled="loading"
class="image-input" class="image-input"
> >
<template #suffix> <template #suffix>
@ -17,7 +16,7 @@
:tooltip="t('上传图片到LaiTool图床获取图片链接')" :tooltip="t('上传图片到LaiTool图床获取图片链接')"
quaternary quaternary
@click="handleUploadImage(videoMessage.imageUrl, 'video', 'imageUrl')" @click="handleUploadImage(videoMessage.imageUrl, 'video', 'imageUrl')"
:disabled="loading"
> >
<template #icon> <template #icon>
<n-icon size="20"> <n-icon size="20">
@ -68,7 +67,6 @@
handleVideoMessageChange('endImageUrl', videoMessage.mjVideoOptionsObject.endImageUrl) handleVideoMessageChange('endImageUrl', videoMessage.mjVideoOptionsObject.endImageUrl)
" "
size="small" size="small"
:disabled="loading"
class="image-input" class="image-input"
> >
<template #suffix> <template #suffix>
@ -83,7 +81,6 @@
'endImageUrl' 'endImageUrl'
) )
" "
:disabled="loading"
> >
<template #icon> <template #icon>
<n-icon size="20"> <n-icon size="20">
@ -132,7 +129,6 @@
:placeholder="t('请输入 {data}', { data: t('提示词(可选)') })" :placeholder="t('请输入 {data}', { data: t('提示词(可选)') })"
size="small" size="small"
:autosize="{ minRows: 2, maxRows: 4 }" :autosize="{ minRows: 2, maxRows: 4 }"
:disabled="loading"
@change="handleVideoMessageChange('prompt')" @change="handleVideoMessageChange('prompt')"
/> />
</n-form-item> </n-form-item>
@ -196,7 +192,6 @@
:placeholder="t('选择视频类型')" :placeholder="t('选择视频类型')"
@update-value="(value) => handleVideoMessageChange('videoType', value)" @update-value="(value) => handleVideoMessageChange('videoType', value)"
size="small" size="small"
:disabled="loading"
class="motion-select" class="motion-select"
:style="{ width: '140px' }" :style="{ width: '140px' }"
/> />
@ -251,7 +246,6 @@
:placeholder="t('选择运动变化程度')" :placeholder="t('选择运动变化程度')"
@update-value="(value) => handleVideoMessageChange('motion', value)" @update-value="(value) => handleVideoMessageChange('motion', value)"
size="small" size="small"
:disabled="loading"
class="motion-select" class="motion-select"
/> />
</div> </div>
@ -310,7 +304,6 @@
:placeholder="t('选择批次数量')" :placeholder="t('选择批次数量')"
@update-value="(value) => handleVideoMessageChange('batchSize', value)" @update-value="(value) => handleVideoMessageChange('batchSize', value)"
size="small" size="small"
:disabled="loading"
class="motion-select" class="motion-select"
/> />
</div> </div>
@ -355,7 +348,6 @@
v-model:value="videoMessage.mjVideoOptionsObject.raw" v-model:value="videoMessage.mjVideoOptionsObject.raw"
size="small" size="small"
@update-value="handleVideoMessageChange('raw')" @update-value="handleVideoMessageChange('raw')"
:disabled="loading"
> >
<template #checked>{{ t('是') }}</template> <template #checked>{{ t('是') }}</template>
<template #unchecked>{{ t('否') }}</template> <template #unchecked>{{ t('否') }}</template>
@ -397,7 +389,6 @@
v-model:value="videoMessage.mjVideoOptionsObject.loop" v-model:value="videoMessage.mjVideoOptionsObject.loop"
size="small" size="small"
@update-value="handleVideoMessageChange('loop')" @update-value="handleVideoMessageChange('loop')"
:disabled="loading"
> >
<template #checked>{{ t('是') }}</template> <template #checked>{{ t('是') }}</template>
<template #unchecked>{{ t('否') }}</template> <template #unchecked>{{ t('否') }}</template>
@ -407,16 +398,40 @@
<!-- 生成按钮 -->
<n-form-item>
<n-button type="primary" size="small" :loading="loading" @click="handleImageToVideo" block>
<template #icon>
<n-icon>
<svg viewBox="0 0 24 24">
<path fill="currentColor" d="M8,5.14V19.14L19,12.14L8,5.14Z" />
</svg>
</n-icon>
</template>
{{ t('生成视频') }}
</n-button>
<div style="display: flex; gap: 12px; width: 100%">
<TooltipButton
:tooltip="t('将当前转视频的基础设置批量应用到所有的分镜中')"
type="default"
size="small"
@click="handleBatchSettings"
style="width: 100px"
>
<template #icon>
<n-icon>
<svg viewBox="0 0 24 24">
<path fill="currentColor" d="M3,6V8H21V6H3M3,11V13H21V11H3M3,16V18H21V16H3Z" />
</svg>
</n-icon>
</template>
{{ t('应用设置') }}
</TooltipButton>
<n-button
type="primary"
size="small"
@click="handleImageToVideo"
style="flex: 1"
>
<template #icon>
<n-icon>
<svg viewBox="0 0 24 24">
<path fill="currentColor" d="M8,5.14V19.14L19,12.14L8,5.14Z" />
</svg>
</n-icon>
</template>
{{ t('生成视频') }}
</n-button>
</div>
</n-form-item>
</n-space>
</template>
@ -453,15 +468,16 @@ const props = defineProps({
videoMessage: {
type: Object,
required: true
},
loading: {
type: Boolean,
default: false
}
})
// emits
const emit = defineEmits(['video-message-change', 'image-to-video', 'image-upload'])
const emit = defineEmits([
'video-message-change',
'image-to-video',
'image-upload',
'batch-settings'
])
// Image load error handling
function handleImageError(imageUrl) {
@ -492,6 +508,11 @@ function handleImageToVideo() {
emit('image-to-video')
}
// Batch-apply the current settings to all storyboards
async function handleBatchSettings() {
emit('batch-settings')
}
</script>
<style scoped> <style scoped>

View File

@ -5,10 +5,10 @@
<n-tab-pane name="image-to-video" :tab="t('图生视频')"> <n-tab-pane name="image-to-video" :tab="t('图生视频')">
<ImageTextVideoInfoMJVideoImageToVideo <ImageTextVideoInfoMJVideoImageToVideo
:video-message="videoMessage" :video-message="videoMessage"
:loading="loading"
@video-message-change="handleVideoMessageChange" @video-message-change="handleVideoMessageChange"
@image-to-video="handleImageToVideo" @image-to-video="handleImageToVideo"
@image-upload="handleUploadImage" @image-upload="handleUploadImage"
@batch-settings="handleBatchSettings"
/> />
</n-tab-pane> </n-tab-pane>
@ -17,11 +17,11 @@
<DisabledWrapper :un-use="false"> <DisabledWrapper :un-use="false">
<ImageTextVideoInfoMJVideoExtend <ImageTextVideoInfoMJVideoExtend
:video-message="videoMessage" :video-message="videoMessage"
:loading="loading"
:task="props.task" :task="props.task"
@video-message-change="handleVideoMessageChange" @video-message-change="handleVideoMessageChange"
@select-parent-task="handleSelectParentTask" @select-parent-task="handleSelectParentTask"
@image-upload="handleUploadImage" @image-upload="handleUploadImage"
@batch-settings="handleBatchSettings"
/> />
</DisabledWrapper> </DisabledWrapper>
</n-tab-pane> </n-tab-pane>
@ -38,35 +38,38 @@
:content-style="{ padding: '8px 16px' }" :content-style="{ padding: '8px 16px' }"
:segmented="true" :segmented="true"
> >
<ImageTextVideoInfoMJVideoSelectParentTask <MediaToVideoSelectParentTask
:taskData="props.task" :taskData="props.task"
:videoList="subVideoPathObject" :videoList="subVideoPathObject"
@close="showParentTaskModal = false" @close="showParentTaskModal = false"
@save-selection="handleSaveSelection"
/>
</n-modal>
</div>
</template>
<script setup>
import { ref, computed } from 'vue'
import { ref, computed, h } from 'vue'
import { NTabs, NTabPane, NModal, useMessage } from 'naive-ui'
import { NTabs, NTabPane, NModal, useMessage, useDialog } from 'naive-ui'
import { ImageToVideoModels, MJVideoMotion } from '@/define/enum/video'
import { ImageToVideoModels, MJVideoMotion, MJVideoType } from '@/define/enum/video'
import { ValidateJsonAndParse } from '@/define/Tools/validate'
import { BookBackTaskType, TaskExecuteType } from '@/define/enum/bookEnum'
import { DEFINE_STRING } from '@/define/ipcDefineString'
import { isEmpty } from 'lodash'
import ImageTextVideoInfoMJVideoSelectParentTask from './MediaToVideoInfoMJVideoSelectParentTask.vue'
import ImageTextVideoInfoMJVideoImageToVideo from './MediaToVideoInfoMJVideoImageToVideo.vue'
import ImageTextVideoInfoMJVideoExtend from './MediaToVideoInfoMJVideoExtend.vue'
import DisabledWrapper from '@/renderer/src/components/common/DisabledWrapper.vue'
import { AddOneTask } from '@/renderer/src/common/task'
import { t } from '@/i18n'
import { useSoftwareStore } from '@/renderer/src/stores'
import { useSoftwareStore, useBookStore } from '@/renderer/src/stores'
import { useFile } from '@/renderer/src/hooks/useFile'
import MediaToVideoSelectParentTask from '../MediaToVideoSelectParentTask.vue'
const message = useMessage()
const dialog = useDialog()
const softwareStore = useSoftwareStore()
const bookStore = useBookStore()
const { UploadImageToLaiTool } = useFile()
// props
@ -79,7 +82,6 @@ const props = defineProps({
//
const activeTab = ref('image-to-video')
const loading = ref(false)
const showParentTaskModal = ref(false)
//
@ -107,7 +109,12 @@ const videoMessage = computed(() => {
})
const subVideoPathObject = computed(() => {
return props.task?.subVideoPathObject || []
return props.task?.subVideoPathObject?.filter(
(video) =>
!isEmpty(video.localPath) &&
(video.type == ImageToVideoModels.MJ_VIDEO ||
video.type == ImageToVideoModels.MJ_VIDEO_EXTEND)
)
})
//
@ -240,11 +247,42 @@ async function handleVideoMessageChange(key, value = undefined) {
)
return
}
// tab
// props.task.videoMessage
if (!props.task.videoMessage) {
props.task.videoMessage = {}
}
Object.assign(props.task.videoMessage, updateObject)
// mjVideoOptions mjVideoOptionsObject
if (updateObject.mjVideoOptions) {
const updatedMjVideoOptions = ValidateJsonAndParse(updateObject.mjVideoOptions)
videoMessage.value.mjVideoOptionsObject = {
...videoMessage.value.mjVideoOptionsObject,
...updatedMjVideoOptions
}
}
}
//
async function handleUploadImage(filePath, type, property) {
debugger
const url = await UploadImageToLaiTool(filePath, type)
if (url) {
//
if (property == 'endImageUrl' || property == 'extendEndImageUrl') {
videoMessage.value.mjVideoOptionsObject[property] = url
} else {
videoMessage.value[property] = url
}
await handleVideoMessageChange(property, url)
}
}
// Batch-apply the current settings to all storyboards
async function handleBatchSettings(value) {
let da = dialog.warning({
title: t('操作确认'),
content: () =>
@ -256,7 +294,7 @@ async function handleUploadImage(filePath, type, property) {
{
default: () =>
t(
'是否上传图片文件到LaiTool云端\n\n上传后会返回一个全球可分享的网络链接地址但是每日限制五十次上传。上传后的图片可用于MJ垫图转视频等功能。\n\n注意上传后的图片会再Laitool服务器留存若介意请勿上传。'
'是否将当前分镜的设置批量应用到其余所有分镜?\n\n同步的设置视频类型SD/HD运动变化Motion批次数据Batch视频原始Raw首尾循环Loop \n\n批量应用后其余分镜的上述基础设置会被替换为当前分镜的数据是否继续'
)
}
),
@ -266,15 +304,51 @@ async function handleUploadImage(filePath, type, property) {
da?.destroy()
try {
softwareStore.spin.spinning = true
softwareStore.spin.tip = t('开始上传文件...')
let url = await UploadImageToLaiTool(filePath, type)
if (property == 'endImageUrl' || property == 'extendEndImageUrl')
videoMessage.value.mjVideoOptionsObject[property] = url
else videoMessage.value[property] = url
await handleVideoMessageChange(property, url)
softwareStore.spin.tip = t('正在批量应用当前设置...')
for (let i = 0; i < bookStore.selectBookTaskDetail.length; i++) {
const element = bookStore.selectBookTaskDetail[i]
let updateObject = {}
//
let elementVideoMessage = element?.videoMessage || {}
let elementMjVideoOptionsString = elementVideoMessage.mjVideoOptions || '{}'
let elementMjVideoOptions = ValidateJsonAndParse(elementMjVideoOptionsString)
elementMjVideoOptions.videoType =
videoMessage.value.mjVideoOptionsObject.videoType ?? MJVideoType.HD
elementMjVideoOptions.motion =
videoMessage.value.mjVideoOptionsObject.motion ?? MJVideoMotion.Low
elementMjVideoOptions.batchSize = videoMessage.value.mjVideoOptionsObject.batchSize || 1
elementMjVideoOptions.raw = videoMessage.value.mjVideoOptionsObject.raw ?? true
if (value != 'extend') {
elementMjVideoOptions.loop = videoMessage.value.mjVideoOptionsObject.loop ?? false
}
elementVideoMessage.mjVideoOptions = JSON.stringify(elementMjVideoOptions)
updateObject.mjVideoOptions = elementVideoMessage.mjVideoOptions
//
let res = await window.book.video.UpdateBookTaskDetailVideoMessage(
element.id,
updateObject
)
if (res.code !== 1) {
message.error(
t('批量应用当前设置失败,{error}', {
error: res.message
})
)
return
}
}
message.success(t('批量应用当前设置成功!'))
} catch (error) {
message.error(error.message)
message.error(
t('批量应用当前设置失败,{error}', {
error: error.message
})
)
} finally {
softwareStore.spin.spinning = false
}
@ -284,6 +358,38 @@ async function handleUploadImage(filePath, type, property) {
}
})
}
// Save the parent-video selection coming from the selection modal
async function handleSaveSelection(currentVideoInfo) {
let taskId = currentVideoInfo.taskId || ''
let videoIndex = currentVideoInfo.index
if (isEmpty(taskId) || videoIndex == null) {
message.error(t('当前选中的视频的 taskId 或 videoIndex 为空,请检查视频信息'))
return
}
//
let updateObject = {
mjVideoOptions: JSON.stringify({
...videoMessage.value.mjVideoOptionsObject,
taskId: taskId,
index: videoIndex
})
}
let res = await window.book.video.UpdateBookTaskDetailVideoMessage(props.task.id, updateObject)
//
if (res.code == 1) {
message.success(t('保存成功'))
videoMessage.value.mjVideoOptionsObject.taskId = taskId
videoMessage.value.mjVideoOptionsObject.index = videoIndex
} else {
message.error(t('保存失败:{error}', { error: res.message }))
return
}
}
</script>
<style scoped>

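The batch-apply handler above boils down to a per-storyboard merge of the current shot's MJ Video options into each stored `mjVideoOptions` JSON string. A minimal sketch of that merge, written against the field names and enum imports that appear in this diff (`mergeMjVideoOptions` itself is not code from the repository):

```javascript
import { MJVideoType, MJVideoMotion } from '@/define/enum/video'

// Sketch: merge the current shot's MJ Video options into another shot's stored options.
// `current` is videoMessage.value.mjVideoOptionsObject, `storedJson` is the target
// detail's videoMessage.mjVideoOptions ('{}' when missing), and `mode` mirrors the
// 'extend' argument emitted by the Extend tab's batch-settings event.
function mergeMjVideoOptions(current, storedJson, mode) {
  const target = JSON.parse(storedJson || '{}')
  target.videoType = current.videoType ?? MJVideoType.HD
  target.motion = current.motion ?? MJVideoMotion.Low
  target.batchSize = current.batchSize || 1
  target.raw = current.raw ?? true
  if (mode !== 'extend') {
    target.loop = current.loop ?? false // loop is not synced when applying from the extend tab
  }
  return JSON.stringify(target)
}
```

The real handler additionally persists each result through `window.book.video.UpdateBookTaskDetailVideoMessage` and reports failures via `message.error`.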
View File

@ -122,7 +122,7 @@ const progress = computed(() => {
if (videoType == ImageToVideoModels.MJ_VIDEO) {
p = messageObject.progress ?? '0%'
} else {
p = '0%'
p = undefined
}
return p
})

View File

@ -374,18 +374,9 @@ const props = defineProps({
}
})
const videoList = computed(() => {
console.log('未过滤的当前视频列表', props.videoList)
//
return props.videoList.filter(
(video) =>
!isEmpty(video.localPath) &&
(video.type == ImageToVideoModels.MJ_VIDEO ||
video.type == ImageToVideoModels.MJ_VIDEO_EXTEND)
)
})
const emit = defineEmits(['save-selection'])
const videoMessage = ref({})
const videoList = computed(() => props.videoList || [])
const message = useMessage()
@ -445,41 +436,9 @@ async function handleSaveMJVideoTaskSelection() {
message.error(t('请选择一个视频'))
return
}
let taskId = currentVideoInfo.value.taskId
let videoIndex = currentVideoInfo.value.index
console.log('当前选中的视频信息', taskId, videoIndex, currentSelectedVideo.value)
if (isEmpty(taskId) || videoIndex == null) {
message.error(t('当前选中的视频的 taskId 或 videoIndex 为空,请检查视频信息'))
return
}
//
let updateObject = {
mjVideoOptions: JSON.stringify({
...videoMessage.value.mjVideoOptionsObject,
taskId: taskId,
index: videoIndex
})
}
console.log('更新的实际数据', updateObject)
let res = await window.book.video.UpdateBookTaskDetailVideoMessage(
props.taskData.id,
updateObject
)
//
if (res.code == 1) {
message.success(t('保存成功'))
videoMessage.value.mjVideoOptionsObject.taskId = taskId
videoMessage.value.mjVideoOptionsObject.index = videoIndex
} else {
message.error(t('保存失败:{error}', { error: res.message }))
return
}
emit('save-selection', {
...toRaw(currentVideoInfo.value)
})
} catch (error) {
message.error(t('保存失败:{error}', { error: error.message }))
}

View File

@ -134,57 +134,71 @@ onMounted(async () => {
onUnmounted(() => {
//
window.system.removeEventListen([DEFINE_STRING.BOOK.MJ_VIDEO_TO_VIDEO_RETURN])
window.system.removeEventListen(DEFINE_STRING.BOOK.KLING_IMAGE_TO_VIDEO_RETURN)
})
//
function handleMessageChange(videoMessage, id) {
let findIndex = bookStore.selectBookTaskDetail.findIndex((item) => item.id === id)
if (findIndex !== -1) {
bookStore.selectBookTaskDetail[findIndex].videoMessage.status = videoMessage.status
bookStore.selectBookTaskDetail[findIndex].videoMessage.taskId = videoMessage.taskId
bookStore.selectBookTaskDetail[findIndex].videoMessage.msg = videoMessage.msg
bookStore.selectBookTaskDetail[findIndex].videoMessage.messageData = videoMessage.messageData
}
}
function handleEventReceive(value) {
try {
if (value.type == ResponseMessageType.MJ_VIDEO) {
//
let videoMessage = JSON.parse(value.data)
console.log('收到 mj video视频处理进度', videoMessage)
handleMessageChange(videoMessage, value.id)
} else if (value.type == ResponseMessageType.MJ_VIDEO_EXTEND) {
//
let videoMessage = JSON.parse(value.data)
console.log('收到 mj video extend 视频处理进度', videoMessage)
handleMessageChange(videoMessage, value.id)
} else if (value.type == ResponseMessageType.KLING_VIDEO) {
let videoMessage = JSON.parse(value.data)
console.log('收到 Kling video extend 视频处理进度', videoMessage)
handleMessageChange(videoMessage, value.id)
} else if (value.type == ResponseMessageType.KLING_VIDEO_EXTEND) {
let videoMessage = JSON.parse(value.data)
console.log('收到 Kling video extend 视频处理进度', videoMessage)
handleMessageChange(videoMessage, value.id)
} else if (value.type == ResponseMessageType.VIDEO_SUCESS) {
//
let bookTaskDetail = JSON.parse(value.data)
console.log('视频处理完成', bookTaskDetail)
let findIndex = bookStore.selectBookTaskDetail.findIndex(
(item) => item.id === bookTaskDetail.id
)
if (findIndex !== -1) {
bookStore.selectBookTaskDetail[findIndex] = bookTaskDetail
}
} else {
console.error('未知的返回类型', value.type, value)
let findIndex = bookStore.selectBookTaskDetail.findIndex((item) => item.id === value.id)
if (findIndex != -1) {
bookStore.selectBookTaskDetail[findIndex].videoMessage.status = VideoStatus.FAIL
bookStore.selectBookTaskDetail[findIndex].videoMessage.msg = value.message
}
}
} catch (error) {
message.error(t('处理数据时发生错误,{error}', { error: error.message }))
}
}
function handleIpcTaskListChange() {
// SD
window.system.setEventListen([DEFINE_STRING.BOOK.MJ_VIDEO_TO_VIDEO_RETURN], (value) => {
try {
if (value.type == ResponseMessageType.MJ_VIDEO) {
//
let videoMessage = JSON.parse(value.data)
console.log('收到 mj video视频处理进度', videoMessage)
let findIndex = bookStore.selectBookTaskDetail.findIndex((item) => item.id === value.id)
if (findIndex !== -1) {
bookStore.selectBookTaskDetail[findIndex].videoMessage.status = videoMessage.status
bookStore.selectBookTaskDetail[findIndex].videoMessage.taskId = videoMessage.taskId
bookStore.selectBookTaskDetail[findIndex].videoMessage.msg = videoMessage.msg
bookStore.selectBookTaskDetail[findIndex].videoMessage.messageData =
videoMessage.messageData
}
} else if (value.type == ResponseMessageType.MJ_VIDEO_EXTEND) {
//
let videoMessage = JSON.parse(value.data)
console.log('收到 mj video extend 视频处理进度', videoMessage)
let findIndex = bookStore.selectBookTaskDetail.findIndex((item) => item.id === value.id)
if (findIndex !== -1) {
bookStore.selectBookTaskDetail[findIndex].videoMessage.status = videoMessage.status
bookStore.selectBookTaskDetail[findIndex].videoMessage.taskId = videoMessage.taskId
bookStore.selectBookTaskDetail[findIndex].videoMessage.msg = videoMessage.msg
bookStore.selectBookTaskDetail[findIndex].videoMessage.messageData =
videoMessage.messageData
}
} else if (value.type == ResponseMessageType.VIDEO_SUCESS) {
//
let bookTaskDetail = JSON.parse(value.data)
console.log('视频处理完成', bookTaskDetail)
let findIndex = bookStore.selectBookTaskDetail.findIndex(
(item) => item.id === bookTaskDetail.id
)
if (findIndex !== -1) {
bookStore.selectBookTaskDetail[findIndex] = bookTaskDetail
}
} else {
console.error('未知的返回类型', value.type, value)
let findIndex = bookStore.selectBookTaskDetail.findIndex((item) => item.id === value.id)
if (findIndex != -1) {
bookStore.selectBookTaskDetail[findIndex].videoMessage.status = VideoStatus.FAIL
bookStore.selectBookTaskDetail[findIndex].videoMessage.msg = value.message
}
}
} catch (error) {
message.error(t('处理数据时发生错误,{error}', { error: error.message }))
}
})

window.system.setEventListen([DEFINE_STRING.BOOK.MJ_VIDEO_TO_VIDEO_RETURN], (value) => {
handleEventReceive(value)
})

window.system.setEventListen(DEFINE_STRING.BOOK.KLING_IMAGE_TO_VIDEO_RETURN, (value) => {
handleEventReceive(value)
})
}

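For reference, every channel registered above feeds the same `handleEventReceive` dispatcher, which expects an envelope with `type`, `id` and a JSON string in `data`. A small illustrative payload — the field values are made up, only the shape is taken from the handler:

```javascript
// Hypothetical progress event, shaped the way handleEventReceive consumes it.
const value = {
  type: 'MJ_VIDEO', // stand-in for ResponseMessageType.MJ_VIDEO (actual enum value not shown in this diff)
  id: 'book-task-detail-id', // matched against bookStore.selectBookTaskDetail[i].id
  data: JSON.stringify({
    status: 'RUNNING', // copied into videoMessage.status
    taskId: 'mj-task-123', // copied into videoMessage.taskId
    msg: '', // copied into videoMessage.msg
    messageData: { progress: '35%' } // copied into videoMessage.messageData
  })
}
handleEventReceive(value)
```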
View File

@ -27,6 +27,7 @@ import DataTableAction from './DataTableAction.vue'
import DatatableHeaderImage from './DatatableHeaderImage.vue'
import DatatableHeaderCharacter from './DatatableHeaderCharacter.vue'
import { t } from '@/i18n'
import DatatableHeaderGptPrompt from './DatatableHeaderGptPrompt.vue'
const bookStore = useBookStore()
const softwareStore = useSoftwareStore()
@ -94,7 +95,7 @@ const columns = computed(() => {
// title(row) {
// return h(ODataTableHeaderGptPrompt)
// },
title: t('提示词'),
title: () => h(DatatableHeaderGptPrompt, { bookTask: bookStore.selectBookTask }),
key: 'gpt_prompt',
className: 'empty-margin',
@ -115,9 +116,6 @@ const columns = computed(() => {
width: '300',
minWidth: softwareStore.showCompletePrompt ? 300 : 0,
className: 'empty-margin' className: 'empty-margin'
// render(row, index) {
// return h(ODatatablePrompt, { initData: row, index: index })
// }
},
{
//

View File

@ -0,0 +1,115 @@
<template>
<div style="display: flex">
<span style="margin-right: 5px">{{ t('提示词') }}</span>
<TooltipButton
size="tiny"
tertiary
type="primary"
:style="{ marginLeft: '10px' }"
@click="showModal = true"
>
{{ t('通用前/后缀') }}
</TooltipButton>
<n-modal
preset="dialog"
:mask-closable="false"
title="提示词前后缀设置"
v-model:show="showModal"
:show-icon="false"
>
<div style="padding: 16px 0">
<n-space vertical size="large">
<n-form v-model="data" label-width="80px">
<n-form-item :label="t('通用前缀')">
<n-input
v-model:value="data.prefix"
:placeholder="
t('请输入 {data}', {
data: t('通用前缀')
})
"
clearable
/>
</n-form-item>
<n-form-item :label="t('通用后缀')">
<n-input
v-model:value="data.suffix"
:placeholder="
t('请输入 {data}', {
data: t('通用后缀')
})
"
clearable
/>
</n-form-item>
</n-form>
<n-space justify="end">
<n-button @click="handleCancel">{{ t('取消') }}</n-button>
<n-button type="primary" @click="handleSave">{{ t('保存') }}</n-button>
</n-space>
</n-space>
</div>
</n-modal>
</div>
</template>
<script setup>
import { TimeDelay } from '@/define/Tools/time'
import { t } from '@/i18n'
import { isEmpty } from 'lodash'
const message = useMessage()
const props = defineProps({
bookTask: {
type: Object,
required: true
}
})
const data = ref({
prefix: '',
suffix: ''
})
const showModal = ref(false)
//
watch(showModal, (newVal) => {
if (newVal) {
data.value.prefix = props.bookTask?.prefixPrompt || ''
data.value.suffix = props.bookTask?.suffixPrompt || ''
}
})
const handleSave = async () => {
let updateObject = {}
if (!isEmpty(data.value.prefix)) {
updateObject['prefixPrompt'] = data.value.prefix
}
if (!isEmpty(data.value.suffix)) {
updateObject['suffixPrompt'] = data.value.suffix
}
//
let res = await window.book.ModifyBookTaskDataById(props.bookTask.id, updateObject)
if (res.code != 1) {
message.error(t('保存失败:{error}', { error: res.message }))
return
}
message.success(t('保存成功'))
await TimeDelay(300)
//
showModal.value = false
}
const handleCancel = () => {
//
showModal.value = false
message.info(t('取消操作'))
}
</script>
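This header component only stores `prefixPrompt` and `suffixPrompt` on the book task via `ModifyBookTaskDataById`; where they are combined with each storyboard's prompt is not part of this diff. A plausible composition sketch, purely as an assumption (`buildFinalPrompt` is a hypothetical helper, not code from the repository):

```javascript
// Hypothetical helper: wrap a single storyboard's gptPrompt with the task-level
// prefix and suffix, skipping empty pieces.
function buildFinalPrompt(bookTask, detail) {
  return [bookTask.prefixPrompt, detail.gptPrompt, bookTask.suffixPrompt]
    .filter((part) => part && part.trim().length > 0)
    .join(', ')
}
```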

View File

@ -184,6 +184,14 @@ const promptOptions = ref([
{
label: '3. ' + t('重置所有提示词'),
key: 'reset-all-prompt'
},
{
type: 'divider',
key: 'd5_2'
},
{
label: '4. ' + t('导入提示词'),
key: 'import-prompt'
}
])
@ -583,6 +591,78 @@ async function handleResetAllPrompt() {
})
}
// Import prompts from a TXT file (one prompt per line, matching the storyboard order)
async function handleImportPrompt() {
let da = dialog.warning({
title: t('操作确认'),
content: () =>
h(
'div',
{ style: { whiteSpace: 'pre-line' } },
{
default: () =>
`该操作会选择 TXT 文件进行导入提示词,\n\n提示词文件格式要求\n每行一个提示词顺序和当前分镜顺序一致\n如果某个分镜不需要导入提示词可以留空该行\n超出分镜的提示词会被删除不足则只导入文本中有的提示词数据\n\n是否继续`
}
),
negativeText: t('取消'),
positiveText: t('继续'),
onPositiveClick: async () => {
da?.destroy()
await TimeDelay(200)
try {
//
let fileRes = await window.system.SelectSingleFile(['txt'])
if (fileRes.code == 0) {
throw new Error(fileRes.message)
}
let filePath = fileRes.data
//
let fileContentRes = await window.system.ReadTextFile(filePath)
if (fileContentRes.code == 0) {
throw new Error(fileContentRes.message)
}
let fileContent = fileContentRes.data.content
if (fileContent == null || fileContent == undefined || isEmpty(fileContent.trim())) {
throw new Error(t('导入的提示词文件内容为空'))
}
let lines = fileContent.split(/\r?\n/).map((line) => line.trim())
//
for (let i = 0; i < lines.length && i < bookStore.selectBookTaskDetail.length; i++) {
const element = lines[i]
//
let res = await window.book.ModifyBookTaskDetailById(
bookStore.selectBookTaskDetail[i].id,
{
gptPrompt: element
}
)
if (res.code != 1) {
throw new Error(
t('导入第 {line} 行提示词失败,{error}', { line: i + 1, error: res.message })
)
}
//
bookStore.selectBookTaskDetail[i].gptPrompt = element
}
message.success(t('导入提示词成功'))
} catch (error) {
softwareStore.spin.spinning = false
message.error(t('导入提示词失败,{error}', { error: error.message }))
}
},
onNegativeClick: () => {
message.info(t('取消操作'))
},
closable: true,
maskClosable: false
})
}
//
async function handlePromptSelect(key) {
if (key === 'original-get-all-ai-prompt') {
@ -597,6 +677,8 @@ async function handlePromptSelect(key) {
)
} else if (key == 'reset-all-prompt') {
await handleResetAllPrompt()
} else if (key == 'import-prompt') {
await handleImportPrompt()
} else {
message.error(t('未知一键推理操作'))
}

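The TXT import above is strictly line-oriented: line *n* of the file is written into storyboard *n*, and lines beyond the storyboard count are dropped. A small sketch of the same split-and-pair step, using a made-up file content string and a placeholder storyboard list:

```javascript
// Sample prompt file content: one prompt per line, order matches the storyboard order.
const fileContent = 'a misty mountain at dawn\n\na neon-lit street in the rain'

// Same parsing as handleImportPrompt: split on \r?\n and trim each line.
const lines = fileContent.split(/\r?\n/).map((line) => line.trim())

// Pair lines with storyboards; the shorter of the two decides how many updates run.
const details = [{ id: 'd1' }, { id: 'd2' }, { id: 'd3' }] // placeholder storyboard list
for (let i = 0; i < lines.length && i < details.length; i++) {
  console.log(`storyboard ${details[i].id} <- "${lines[i]}"`)
}
```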
View File

@ -0,0 +1,480 @@
<template>
<n-space vertical :size="12">
<!-- 验证错误提示 -->
<n-alert v-if="hasValidationErrors" type="error" closable>
<template #header>{{ t('配置验证失败') }}</template>
<div>
<div v-for="(error, index) in validationErrors" :key="index" style="margin-bottom: 4px">
{{ error }}
</div>
</div>
</n-alert>
<!-- 循环渲染所有配置选项 -->
<template v-if="!hasValidationErrors">
<n-form-item
v-for="option in options"
:key="option.key"
:show-label="false"
:show-require-mark="false"
:show-feedback="false"
>
<div :class="['config-control', { 'full-width': option.fullWidth }]">
<div class="config-label">
<span>{{ option.label }}</span>
<!-- 帮助提示 -->
<n-tooltip
v-if="option.tooltip"
trigger="hover"
placement="top"
style="max-width: 300px"
>
<template #trigger>
<n-button size="small" quaternary circle>
<template #icon>
<n-icon>
<HelpCircleOutline />
</n-icon>
</template>
</n-button>
</template>
<div style="line-height: 1.5" v-html="option.tooltip"></div>
</n-tooltip>
</div>
<!-- 根据类型渲染不同的控件 -->
<!-- 下拉选择器 -->
<n-select
v-if="option.type === 'select'"
:value="getOptionValue(option.key)"
:options="option.options"
:placeholder="
option.placeholder ||
t('请选择 {data}', {
data: option.label
})
"
@update-value="(value) => handleOptionChange(option.key, value)"
size="small"
:disabled="option.disabled || false"
:style="{ width: option.fullWidth ? '100%' : option.width || '140px' }"
/>
<!-- 开关 -->
<n-switch
v-else-if="option.type === 'switch'"
:value="getOptionValue(option.key)"
size="small"
@update-value="(value) => handleOptionChange(option.key, value)"
:disabled="option.disabled || false"
>
<template #checked>{{ option.checkedText || t('是') }}</template>
<template #unchecked>{{ option.uncheckedText || t('否') }}</template>
</n-switch>
<!-- 数字输入 -->
<n-input-number
v-else-if="option.type === 'number'"
:value="getOptionValue(option.key)"
:show-button="false"
:step="option.step || 1"
:min="option.min"
:max="option.max"
:placeholder="option.placeholder"
@update-value="(value) => handleOptionChange(option.key, value)"
size="small"
:disabled="option.disabled || false"
:style="{ width: option.fullWidth ? '100%' : option.width || '100px' }"
/>
<!-- 文本输入 -->
<div
v-else-if="option.type === 'input'"
class="form-item-with-suffix-button"
:style="{ width: option.fullWidth ? '100%' : 'auto' }"
>
<n-input
:value="getOptionValue(option.key)"
:type="option.inputType || 'text'"
:placeholder="option.placeholder"
:autosize="option.inputType === 'textarea' ? option.autosize : undefined"
@update-value="(value) => handleOptionChange(option.key, value)"
size="small"
:disabled="option.disabled || false"
/>
<TooltipButton
v-if="option.suffixButton"
:type="option.suffixButton.type || 'primary'"
:size="option.suffixButton.size || 'small'"
:quaternary="option.suffixButton.quaternary ?? false"
:circle="option.suffixButton.circle ?? false"
:disabled="option.disabled || option.suffixButton.disabled || false"
ghost
:style="{ width: option.suffixButton.width || '80px' }"
:tooltip="option.suffixButton.tooltip"
@click="() => handleSuffixButtonClick(option)"
>
<template #icon v-if="option.suffixButton.icon">
<n-icon :size="option.suffixButton.iconSize || 16">
<component :is="option.suffixButton.icon" />
</n-icon>
</template>
{{ option.suffixButton.text || '' }}
</TooltipButton>
</div>
<!-- 图片输入 -->
<div
v-else-if="option.type === 'image'"
class="input-with-preview"
:style="{ width: option.fullWidth ? '100%' : 'auto' }"
>
<n-input
:value="getOptionValue(option.key)"
:placeholder="option.placeholder || t('请输入图片链接')"
@update-value="(value) => handleOptionChange(option.key, value)"
size="small"
:disabled="option.disabled || false"
class="image-input"
>
<template #suffix>
<TooltipButton
v-if="option.onUpload"
size="tiny"
quaternary
@click="() => handleImageUpload(option)"
:disabled="option.disabled || false"
:tooltip="option.uploadTooltip || t('上传图片到LaiTool图床获取图片链接')"
>
<template #icon>
<n-icon size="20">
<svg
xmlns="http://www.w3.org/2000/svg"
xmlns:xlink="http://www.w3.org/1999/xlink"
viewBox="0 0 24 24"
>
<g
fill="none"
stroke="currentColor"
stroke-width="2"
stroke-linecap="round"
stroke-linejoin="round"
>
<path d="M4 17v2a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2v-2"></path>
<path d="M7 9l5-5l5 5"></path>
<path d="M12 4v12"></path>
</g>
</svg>
</n-icon>
</template>
</TooltipButton>
</template>
</n-input>
<n-image
v-if="!isEmpty(getOptionValue(option.key))"
:src="getOptionValue(option.key)"
:height="option.previewHeight || 60"
:fallback-src="'data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iMTAwIiBoZWlnaHQ9IjEwMCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIj4KICA8cmVjdCB3aWR0aD0iMTAwIiBoZWlnaHQ9IjEwMCIgZmlsbD0iI2Y1ZjVmNSIvPgogIDx0ZXh0IHg9IjUwIiB5PSI1MCIgZm9udC1mYW1pbHk9IkFyaWFsLCBzYW5zLXNlcmlmIiBmb250LXNpemU9IjEyIiBmaWxsPSIjOTk5IiB0ZXh0LWFuY2hvcj0ibWlkZGxlIiBkeT0iMC4zZW0iPuWbvueJh+WKoOi9veWksei0pTwvdGV4dD4KPC9zdmc+'"
object-fit="contain"
class="preview-image"
@error="() => handleImageError(option.key)"
/>
</div>
</div>
</n-form-item>
</template>
</n-space>
</template>
<script setup>
import {
NSpace,
NFormItem,
NSelect,
NSwitch,
NInputNumber,
NInput,
NButton,
NIcon,
NTooltip,
NAlert,
NImage
} from 'naive-ui'
import { HelpCircleOutline } from '@vicons/ionicons5'
import { computed } from 'vue'
import { t } from '@/i18n'
import { isEmpty } from 'lodash'
import TooltipButton from './TooltipButton.vue'
// props
const props = defineProps({
//
options: {
type: Array,
required: true,
default: () => []
/*
选项格式示例 (option format example):
[
{
key: 'videoType', // unique key; also the field read from / written to `value`
label: '视频类型 (SD/HD)', // label shown next to the control
type: 'select', // control type: select, switch, number, input, image
options: [...], // choices, required for the select type
placeholder: '请选择', // placeholder text
tooltip: '帮助说明文本', // help text, rendered as HTML
width: '140px', // control width
// number type only
step: 0.1, // step for the number type
min: 0, // minimum for the number type
max: 10, // maximum for the number type
inputType: 'textarea', // input type: text, textarea
autosize: { minRows: 2 }, // autosize config for textarea
checkedText: '开启', // checked label for the switch type
uncheckedText: '关闭', // unchecked label for the switch type
// image type only
onUpload: (key, currentValue) => {}, // custom upload callback (receives option key and current value)
uploadTooltip: '上传图片', // upload button tooltip
previewHeight: 60, // preview image height in px
previewText: '图片预览', // preview placeholder text
fullWidth: false, // let the control take the full row width
disabled: false, // disable the control
suffixButton: {
text: string, // button text
icon: Component, // button icon component
iconSize: number, // icon size
type: string, // button type (primary, secondary)
quaternary: boolean, // quaternary style
circle: boolean, // circle button
disabled: boolean, // disable the button
tooltip: string, // tooltip text
width: string, // button width
onClick: Function // click callback
}
}
}
]
*/
},
//
value: {
type: Object,
default: () => ({})
}
})
// emits
const emit = defineEmits([
'update:value',
'change',
'image-upload',
'image-error',
'suffix-button-click'
])
//
const validationErrors = computed(() => {
const errors = []
props.options.forEach((option, index) => {
if (!option.key) {
errors.push(`选项 ${index + 1}: 缺少必需的 key 属性`)
}
if (!option.label) {
errors.push(`选项 ${index + 1} (${option.key || '未知'}): 缺少必需的 label 属性`)
}
if (!option.type) {
errors.push(`选项 ${index + 1} (${option.key || '未知'}): 缺少必需的 type 属性`)
} else {
//
switch (option.type) {
case 'select':
if (!option.options || !Array.isArray(option.options) || option.options.length === 0) {
errors.push(`选项 ${index + 1} (${option.key}): select 类型需要 options 数组且不能为空`)
} else {
// options
option.options.forEach((opt, optIndex) => {
if (
typeof opt !== 'object' ||
!opt.hasOwnProperty('value') ||
!opt.hasOwnProperty('label')
) {
errors.push(
`选项 ${index + 1} (${option.key}): options[${optIndex}] 必须包含 value 和 label 属性`
)
}
})
}
break
case 'number':
if (option.min !== undefined && option.max !== undefined && option.min > option.max) {
errors.push(`选项 ${index + 1} (${option.key}): min 值不能大于 max 值`)
}
if (option.step !== undefined && option.step <= 0) {
errors.push(`选项 ${index + 1} (${option.key}): step 值必须大于 0`)
}
break
case 'input':
if (option.inputType === 'textarea' && option.autosize) {
if (typeof option.autosize !== 'object') {
errors.push(`选项 ${index + 1} (${option.key}): textarea 的 autosize 必须是对象`)
} else if (
option.autosize.minRows !== undefined &&
option.autosize.maxRows !== undefined
) {
if (option.autosize.minRows > option.autosize.maxRows) {
errors.push(`选项 ${index + 1} (${option.key}): autosize.minRows 不能大于 maxRows`)
}
}
}
break
case 'switch':
// switch
break
case 'image':
// image
if (option.onUpload && typeof option.onUpload !== 'function') {
errors.push(`选项 ${index + 1} (${option.key}): onUpload 必须是一个函数`)
}
if (
option.previewHeight !== undefined &&
(typeof option.previewHeight !== 'number' || option.previewHeight <= 0)
) {
errors.push(`选项 ${index + 1} (${option.key}): previewHeight 必须是正数`)
}
break
default:
errors.push(`选项 ${index + 1} (${option.key}): 不支持的 type 类型 "${option.type}"`)
}
}
})
return errors
})
//
const hasValidationErrors = computed(() => validationErrors.value.length > 0)
//
function getOptionValue(key) {
return props.value[key]
}
//
function handleOptionChange(key, value) {
//
if (props.value && typeof props.value === 'object') {
props.value[key] = value
}
//
emit('change', key, value, props.value)
}
//
function handleImageUpload(option) {
const currentValue = getOptionValue(option.key)
if (option.onUpload && typeof option.onUpload === 'function') {
option.onUpload(option.key, currentValue)
} else {
//
emit('image-upload', currentValue, option.key, option)
}
}
//
function handleImageError(key) {
emit('image-error', key, getOptionValue(key))
}
// suffix
function handleSuffixButtonClick(option) {
const currentValue = getOptionValue(option.key)
if (option.suffixButton?.onClick && typeof option.suffixButton.onClick === 'function') {
option.suffixButton.onClick(option.key, currentValue, option)
} else {
//
emit('suffix-button-click', option.key, currentValue, option)
}
}
</script>
<style scoped>
.config-control {
display: flex;
align-items: center;
justify-content: space-between;
width: 100%;
}
.config-control.full-width {
flex-direction: column;
align-items: flex-start;
gap: 8px;
}
.config-label {
display: flex;
align-items: center;
justify-self: center;
gap: 6px;
font-size: 14px;
}
/* 图片输入相关样式 */
.input-with-preview {
display: flex;
align-items: center;
gap: 12px;
width: 100%;
}
.image-input {
flex: 1;
}
.preview-image {
width: auto;
height: 100%;
max-width: 100%;
object-fit: contain;
}
.preview-placeholder {
display: flex;
align-items: center;
justify-content: center;
width: 60px;
height: 60px;
border: 1px solid #e0e0e6;
background-color: #fafafa;
border-radius: 4px;
}
.placeholder-text {
font-size: 12px;
}
.form-item-with-suffix-button {
display: flex;
align-items: stretch;
gap: 8px;
}
.form-item-with-suffix-button .n-input {
flex: 1;
}
.form-item-with-suffix-button .n-button {
flex-shrink: 0;
min-width: 80px;
align-self: stretch;
}
</style>
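A minimal usage sketch for this generic options renderer. The component's file name is not visible in the diff, so `ConfigOptions` below is only a placeholder name; the `options` array follows the format documented in the props comment and is bound via `:options`, `:value` and `@change`:

```javascript
// Hypothetical parent-side setup for the generic config renderer.
import { reactive } from 'vue'

const settings = reactive({ videoType: 'HD', raw: true })

const options = [
  {
    key: 'videoType',
    label: '视频类型 (SD/HD)',
    type: 'select',
    options: [
      { label: 'SD', value: 'SD' },
      { label: 'HD', value: 'HD' }
    ]
  },
  { key: 'raw', label: 'Raw', type: 'switch' }
]

// Template binding: <ConfigOptions :options="options" :value="settings" @change="onChange" />
function onChange(key, value, all) {
  console.log('changed', key, value, all) // `all` is the whole value object after the update
}
```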

View File

@ -0,0 +1,12 @@
<template>
<svg
xmlns="http://www.w3.org/2000/svg"
xmlns:xlink="http://www.w3.org/1999/xlink"
viewBox="0 0 512 512"
>
<path
d="M428.8 137.6h-86.177a115.52 115.52 0 0 0 2.176-22.4c0-47.914-35.072-83.2-92-83.2c-45.314 0-57.002 48.537-75.707 78.784c-7.735 12.413-16.994 23.317-25.851 33.253l-.131.146l-.129.148C135.662 161.807 127.764 168 120.8 168h-2.679c-5.747-4.952-13.536-8-22.12-8H32c-17.673 0-32 12.894-32 28.8v230.4C0 435.106 14.327 448 32 448h64c8.584 0 16.373-3.048 22.12-8h2.679c28.688 0 67.137 40 127.2 40h21.299c62.542 0 98.8-38.658 99.94-91.145c12.482-17.813 18.491-40.785 15.985-62.791A93.148 93.148 0 0 0 393.152 304H428.8c45.435 0 83.2-37.584 83.2-83.2c0-45.099-38.101-83.2-83.2-83.2zm0 118.4h-91.026c12.837 14.669 14.415 42.825-4.95 61.05c11.227 19.646 1.687 45.624-12.925 53.625c6.524 39.128-10.076 61.325-50.6 61.325H248c-45.491 0-77.21-35.913-120-39.676V215.571c25.239-2.964 42.966-21.222 59.075-39.596c11.275-12.65 21.725-25.3 30.799-39.875C232.355 112.712 244.006 80 252.8 80c23.375 0 44 8.8 44 35.2c0 35.2-26.4 53.075-26.4 70.4h158.4c18.425 0 35.2 16.5 35.2 35.2c0 18.975-16.225 35.2-35.2 35.2zM88 384c0 13.255-10.745 24-24 24s-24-10.745-24-24s10.745-24 24-24s24 10.745 24 24z"
fill="currentColor"
></path>
</svg>
</template>

View File

@ -1,20 +1,73 @@
import { useMessage } from 'naive-ui'
import { t } from '@/i18n'
import { useMessage, useDialog } from 'naive-ui'
import { useSoftwareStore } from '@renderer/stores'
import { h } from 'vue'
export function useFile() {
const message = useMessage()
const dialog = useDialog()
const softwareStore = useSoftwareStore()
async function UploadImageToLaiTool(imagePath: string, type: "video" | "image") {
/**
* Upload an image to the LaiTool image host, after a confirmation dialog
* @param imagePath local path of the image file to upload
* @param type upload category: "video" or "image"
* @returns Promise<string | null> the shareable URL, or null on cancel / failure
*/
async function UploadImageToLaiTool(
imagePath: string,
type: "video" | "image"
): Promise<string | null> {
return new Promise((resolve) => {
let da = dialog.warning({
title: t('操作确认'),
content: () =>
h(
'div',
{
style: { whiteSpace: 'pre-line' }
},
{
default: () =>
t(
'是否上传图片文件到LaiTool云端\n\n上传后会返回一个全球可分享的网络链接地址但是每日限制五十次上传。上传后的图片可用于MJ垫图转视频等功能。\n\n注意上传后的图片会再Laitool服务器留存若介意请勿上传。'
)
}
),
positiveText: t('继续'),
negativeText: t('取消'),
onPositiveClick: async () => {
da?.destroy()
try {
softwareStore.spin.spinning = true
softwareStore.spin.tip = t('开始上传文件...')
// 开始上传
let res = await window.system.UploadImageToLaiTool(imagePath, type)
if (res.code != 1) {
message.error(res.message)
return;
}
let url = res.data.url;
return url;
let res = await window.system.UploadImageToLaiTool(imagePath, type)
if (res.code !== 1) {
message.error(res.message || t('上传失败'))
resolve(null)
return
}
const url = res.data.url
resolve(url)
} catch (error) {
const err = error as Error
message.error(err.message)
resolve(null)
} finally {
softwareStore.spin.spinning = false
}
},
onNegativeClick: () => {
da?.destroy()
message.info(t('取消操作'))
resolve(null) // 用户取消也返回 null
}
})
})
}
return {