幻能边境 (EOW) asset unpacking — help wanted

It's still listed as a pre-registration, but a friend in Japan tells me it has already been on DMM for a while.
I took a look at the web version and confirmed it is Cocos2d-JS (Cocos Creator).
The asset filenames are base64-encoded (not actually encrypted) uuids, but unlike vanilla Cocos2d-JS there is an extra string appended after each uuid. It reminds me of 纯爱航线, which may have done a similar salting trick. Hoping someone can take a crack at this.
E-labs pre-reg link: https://www.ero-labs.com/cn/prereg_iframe.html?id=44&hgame_id=161
DMM web link: Now loading...


Related links


Possible asset list
https://eowgame.jcbgame.com/eow-jp-game/game/assets/main/native/79/798d3e88-bbb2-4625-a360-41658fecdafb.f3300.manifest

Example asset URL (PNG)
https://eowgame.jcbgame.com/eow-jp-game/game/assets/resources/native/48/482373cd-94be-48c4-b78b-94a392772664.5cf9a.png

URL of an imported json file
https://eowgame.jcbgame.com/eow-jp-game/game/assets/resources/import/48/[email protected]

For now I'm only mapping out how the files relate to each other; the details will have to wait for the E-labs release, since the DMM build is censored (mosaiced) to comply with Japanese law.

No wonder it looked familiar — isn't this the 圣战残响X I shared earlier?

The part after the uuid is an md5, same scheme as 放置少女.

I half get it and half don't...

{"size":290105,"md5":"566ed685fa52fd282294ba6c0f298b82"},"assets/resources/native/47/473b0fc0-e89a-49e8-a52d-0b60ec511f4e.png":{"size":7502,"md5":"60d3bbb40b9ba4f07e4e604ecf494579"},"assets/resources/native/48/482373cd-94be-48c4-b78b-94a392772664.png":{"size":28643,"md5":"5cf9a9b09da59b55ade0dda277672370"},"assets/resources/native/49/4901f920-56b2-4118-a409-a476c515f890.png":


assets/resources/native/48/482373cd-94be-48c4-b78b-94a392772664.5cf9a.png
has md5 60d3bbb40b9ba4f07e4e604ecf494579 in the manifest,
but that md5 is actually the md5 of the next file in the manifest list, i.e.
{"size":28643,"md5":"5cf9a9b09da59b55ade0dda277672370"},"assets/resources/native/49/4901f920-56b2-4118-a409-a476c515f890.png":

Every file in the manifest seems to follow this pattern.

The 5-char hash at the end is the version; you only need to look at the version field of the config.

"assets/resources/native/48/482373cd-94be-48c4-b78b-94a392772664.png":{"size":28643,"md5":"5cf9a9b09da59b55ade0dda277672370"},

"asset path":{"size":asset size,"md5":"asset md5"},
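Putting the two observations together, here is a minimal sketch; it assumes the 5-char suffix in a native filename is simply the first five hex chars of that entry's manifest md5, which matches the example entry above (`versioned_url` is my name, not from the game's code):

```python
# Build the versioned native URL for one manifest entry, assuming the
# filename's 5-char hash is md5[:5] (hypothetical helper).
BASE = "https://eowgame.jcbgame.com/eow-jp-game/game/assets"

def versioned_url(path, md5):
    rel = path[len("assets"):]      # strip the leading "assets"
    stem, ext = rel.rsplit('.', 1)  # split off the extension
    return f"{BASE}{stem}.{md5[:5]}.{ext}"

entry = ("assets/resources/native/48/482373cd-94be-48c4-b78b-94a392772664.png",
         "5cf9a9b09da59b55ade0dda277672370")
print(versioned_url(*entry))
# -> https://eowgame.jcbgame.com/eow-jp-game/game/assets/resources/native/48/482373cd-94be-48c4-b78b-94a392772664.5cf9a.png
```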

Damn... that was Edge's "clean print" mode mangling things.
Still better to wait for the ero-labs release; the DMM build is customized.
I'm back on the mainland these few weeks, so it's not very convenient to work on this.


Nice.

First, fetch the config info from

https://eowgame.jcbgame.com/eow-jp-game/bundle/version.json?time=1750059109198

The content looks like:

[
	{
		"abName": "activity",
		"url": "",
		"version": "05669"
	},
	{
		"abName": "anima",
		"url": "",
		"version": "cc2e9"
	},
	{
		"abName": "barner",
		"url": "",
		"version": "a9bcc"
	},
	{
		"abName": "battle",
		"url": "",
		"version": "81b93"
	},
	{
		"abName": "bgs",
		"url": "",
		"version": "533f0"
	},
	{
		"abName": "career",
		"url": "",
		"version": "fe47e"
	},
	{
		"abName": "cat",
		"url": "",
		"version": "414df"
	},
	{
		"abName": "dictionary",
		"url": "",
		"version": "14b4a"
	},
	{
		"abName": "mp4",
		"url": "",
		"version": "60481"
	},
	{
		"abName": "ornament",
		"url": "",
		"version": "a7d8a"
	},
	{
		"abName": "sd",
		"url": "",
		"version": "51a37"
	},
	{
		"abName": "sound",
		"url": "",
		"version": "a13be"
	},
	{
		"abName": "texture",
		"url": "",
		"version": "cb2d0"
	},
	{
		"abName": "vd",
		"url": "",
		"version": "676e3"
	},
	{
		"abName": "icon",
		"url": "",
		"version": "1a282"
	},
	{
		"abName": "maps",
		"url": "",
		"version": "fcfad"
	},
	{
		"abName": "normal",
		"url": "",
		"version": "f70cd"
	},
	{
		"abName": "special",
		"url": "",
		"version": "719fa"
	},
	{
		"abName": "spdata",
		"url": "",
		"version": "49f27"
	},
	{
		"abName": "special2",
		"url": "",
		"version": "301bd"
	}
]

For example, maps is

https://eowgame.jcbgame.com/eow-jp-game/bundle/maps/cc.config.fcfad.json
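The mapping from version.json to per-bundle config URLs can be sketched like this (assumes the list format shown above; `config_urls` is my name, not the game's):

```python
# Turn the version.json bundle list into cc.config URLs,
# following the maps example above.
BUNDLE_ROOT = "https://eowgame.jcbgame.com/eow-jp-game/bundle"

def config_urls(bundles):
    return {b["abName"]: f"{BUNDLE_ROOT}/{b['abName']}/cc.config.{b['version']}.json"
            for b in bundles}

bundles = [{"abName": "maps", "url": "", "version": "fcfad"}]
print(config_urls(bundles)["maps"])
# -> https://eowgame.jcbgame.com/eow-jp-game/bundle/maps/cc.config.fcfad.json
```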

The uuid decoding code:


# Cocos Creator compressed-uuid decoder: the first two chars stay as-is,
# the remaining 20 base64 chars expand to 30 hex digits of the uuid.
BASE64_CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"
BASE64_VALUES = [0] * 128
for idx, char in enumerate(BASE64_CHARS):
    BASE64_VALUES[ord(char)] = idx

HEX_CHARS = list('0123456789abcdef')
_t = ['', '', '', '']
UUID_TEMPLATE = _t + _t + ['-'] + _t + ['-'] + _t + ['-'] + _t + ['-'] + _t + _t + _t
INDICES = [i for i, x in enumerate(UUID_TEMPLATE) if x != '-']


def decode_uuid(base64_str):
    result = UUID_TEMPLATE.copy()

    result[0] = base64_str[0]
    result[1] = base64_str[1]

    j = 2
    for i in range(2, 22, 2):
        lhs = BASE64_VALUES[ord(base64_str[i])]
        rhs = BASE64_VALUES[ord(base64_str[i + 1])]

        result[INDICES[j]] = HEX_CHARS[lhs >> 2]
        j += 1
        result[INDICES[j]] = HEX_CHARS[((lhs & 3) << 2) | (rhs >> 4)]
        j += 1
        result[INDICES[j]] = HEX_CHARS[rhs & 0xF]
        j += 1

    return ''.join(result)

input_str = "00E9xoPOlFz574sKTQmFwy"
decoded = decode_uuid(input_str)
print(decoded)

For example, b4v+FRL1pCVaY1Bm046Env@f9941 corresponds to

https://eowgame.jcbgame.com/eow-jp-game/bundle/maps/import/b4/[email protected]

I don't really understand how to use this import json to determine the file extension. Also, one uuid without @ seems to have multiple @-suffixed variants, e.g.
b4v+FRL1pCVaY1Bm046Env@f9941 and b4v+FRL1pCVaY1Bm046Env@6c48a.
The game appears to download assets on demand rather than shipping everything up front.
APK installer link:
https://dl-app.games.dmm.com/android/jp.co.fanzagames.eow_x_ap?stamp=1748408755

Digging straight into anima

This bundle contains the spine assets.
With the JS engine I've really only touched 2.x games like 苍空物语; I haven't seen this trailing-hash style before and don't know how to restore it :dizzy_face:
Let me tweak one of my own tools and try — dug the files out of an ancient drive...

import / native extraction
import re

def extract_arrays_from_json(json_path, uuids_path, import_path):
    # 1. Read the JSON file
    try:
        with open(json_path, "r", encoding="utf-8") as f:
            content = f.read()
        print(f"✅ Read JSON file, length: {len(content)} chars")
    except Exception as e:
        print(f"❌ Failed to read file: {str(e)}")
        return

    # 2. Extract the uuids array (contents of the [])
    try:
        uuids_match = re.search(r'"uuids":\s*\[([^\]]+)\]', content)
        if not uuids_match:
            print("Warning: uuids array not found")
            return
        
        uuids_str = uuids_match.group(1)
        # Clean up elements (strip quotes, spaces, commas)
        uuids = [item.strip().strip('"') for item in uuids_str.split(',') if item.strip()]
        print(f"✅ Extracted uuids: {len(uuids)} entries")
        
        # Save to file
        with open(uuids_path, "w", encoding="utf-8") as f:
            f.write("\n".join(uuids))
        print(f"✅ Saved uuids to: {uuids_path}")
    except Exception as e:
        print(f"❌ Failed to process uuids: {str(e)}")

    # 3. Extract the import array (inside versions.import)
    try:
        # Match versions.import: [...]
        import_match = re.search(r'"versions":\s*\{\s*"import":\s*\[([^\]]+)\]', content)
        if not import_match:
            print("Warning: versions.import array not found")
            return
        
        import_str = import_match.group(1)
        # Group elements in pairs (index + hash string)
        import_items = [item.strip() for item in import_str.split(',') if item.strip()]
        formatted_import = []
        for i in range(0, len(import_items), 2):
            if i + 1 < len(import_items):
                # Keep the index as-is, strip quotes from the string
                num = import_items[i]
                string_val = import_items[i+1].strip('"')
                formatted_import.append(f"{num}, {string_val},")
        
        print(f"✅ Extracted import: {len(formatted_import)} lines")
        
        # Save to file
        with open(import_path, "w", encoding="utf-8") as f:
            f.write("\n".join(formatted_import))
        print(f"✅ Saved import to: {import_path}")
    except Exception as e:
        print(f"❌ Failed to process import: {str(e)}")

    # 4. Extract the native array and append it after the import file
    try:
        native_match = re.search(r'"native":\s*\[([^\]]+)\]', content)
        if not native_match:
            print("Warning: native array not found")
            return
        
        native_str = native_match.group(1)
        # Group elements in pairs (index + hash string)
        native_items = [item.strip() for item in native_str.split(',') if item.strip()]
        formatted_native = []
        for i in range(0, len(native_items), 2):
            if i + 1 < len(native_items):
                # Keep the index as-is, strip quotes from the string
                num = native_items[i]
                string_val = native_items[i+1].strip('"')
                formatted_native.append(f"{num}, {string_val},")
        
        print(f"✅ Extracted native: {len(formatted_native)} lines")
        
        # Append after the import data
        with open(import_path, "a", encoding="utf-8") as f:
            f.write("\n")  # blank line separating import and native
            f.write("\n".join(formatted_native))
        print(f"✅ Appended native to: {import_path}")
    except Exception as e:
        print(f"❌ Failed to process native: {str(e)}")

if __name__ == "__main__":
    # File paths (same directory)
    json_path = "cc.config.fcfad.json"
    uuids_path = "uuids.txt"
    import_path = "import.txt"
    
    extract_arrays_from_json(json_path, uuids_path, import_path)
    print("\n🎉 Extraction complete!")

This pulls out the uuids plus the import and native arrays; a uuid's index is its line number minus 1. There are two different base URLs — whether a hash belongs under import or native depends on which block it came from:

https://eowgame.jcbgame.com/eow-jp-game/bundle/.../native/

https://eowgame.jcbgame.com/eow-jp-game/bundle/.../import/

The concrete extension can be read from the paths block: ["picture/beginner_bg_info2_en",1,1] (note the 1,1)
"types":["cc.Prefab",
"cc.ImageAsset",
"cc.Texture2D",
"cc.SpriteFrame",
"sp.SkeletonData",
"cc.TextAsset",
"cc.ParticleAsset",
"cc.VideoClip",
"cc.Asset"]}
(ugh — I was actually looking at dictionary just now)

Overall procedure
First match the indices in front of the hashes in import/native against the uuids and paths blocks; then decode the uuid and append the hash; finally determine the extension from the end of the path.
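The procedure above can be sketched compactly by parsing the cc.config json with the `json` module instead of regexes. This is only a sketch under the structure described in this thread: `bundle_urls`, the EXT table, and reading `paths` entries as `[path, typeIdx, ...]` are my assumptions, and `decode_uuid` is the decoder shown earlier:

```python
# Map cc.config data to download URLs: versions arrays pair a uuid index
# with a 5-char hash; import entries are serialized jsons, native entries
# are raw payloads whose extension follows from the types table.
EXT = {"cc.ImageAsset": ".png", "sp.SkeletonData": ".bin", "cc.VideoClip": ".mp4"}
ROOT = "https://eowgame.jcbgame.com/eow-jp-game/bundle"

def bundle_urls(cfg, bundle, decode_uuid):
    uuids = cfg["uuids"]
    imp = cfg["versions"]["import"]
    nat = cfg["versions"]["native"]
    imp_map = dict(zip(imp[0::2], imp[1::2]))  # uuid index -> import hash
    nat_map = dict(zip(nat[0::2], nat[1::2]))  # uuid index -> native hash
    urls = []
    for idx, entry in cfg["paths"].items():
        i = int(idx)
        u = decode_uuid(uuids[i].split('@')[0])  # drop any @variant suffix
        if i in imp_map:  # serialized asset json under import/
            urls.append(f"{ROOT}/{bundle}/import/{u[:2]}/{u}.{imp_map[i]}.json")
        if i in nat_map:  # raw payload (png/bin/mp4...) under native/
            ext = EXT.get(cfg["types"][entry[1]], "")
            urls.append(f"{ROOT}/{bundle}/native/{u[:2]}/{u}.{nat_map[i]}{ext}")
    return urls
```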

The @6c48a / @f9941 variants don't seem to matter; just process the uuids without @.

Take the code from my earlier post and adapt it and it'll just work — this is completely ordinary Cocos2d, no real encryption.

from cocos2dAsset.downloader import assetDownloader
from cocos2dAsset.parser import ManifestJson
from requests import Session

session = Session()

cocosasset_config =  {
            'downloader_weburl': '',
            'downloader_assetroot': 'https://eowgame.jcbgame.com/eow-jp-game/bundle',  # URL prefix for downloading the manifests
            'downloader_savepath': 'eowAsset', # save location
            'downloader_threadnum': 10,
            'asset_baseurl': 'https://eowgame.jcbgame.com/eow-jp-game/bundle', # URL prefix manifestJson uses when joining addresses
        }

    
def loadConfigAndCreate(configUrl,cocosconfig):
    assetConfig = session.get(configUrl).json()
    downloader = assetDownloader(cocosconfig)
    downloader.manifestOfmanifestData = assetConfig['assets']['bundleVers']
    del downloader.manifestOfmanifestData['resources'],downloader.manifestOfmanifestData['main'],downloader.manifestOfmanifestData['internal'] # these live under different URLs
    downloader.jsurl = '/{typename}/index.{version}.js'
    downloader.configurl = '/{typename}/config.{version}.json'
    
    return downloader

downloader = loadConfigAndCreate('https://eowgame.jcbgame.com/eow-jp-game/game/src/settings.0f043.json',cocosasset_config)
downloader.downloadAllManifest()
downloader.downloadAllFromManifest()

Working on it... but a few URLs in that settings file 404'd, so I'm rolling my own.

import json

BASE64_CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"
BASE64_VALUES = [0] * 128
for idx, char in enumerate(BASE64_CHARS):
    BASE64_VALUES[ord(char)] = idx

HEX_CHARS = list('0123456789abcdef')
_t = ['', '', '', '']
UUID_TEMPLATE = _t + _t + ['-'] + _t + ['-'] + _t + ['-'] + _t + ['-'] + _t + _t + _t
INDICES = [i for i, x in enumerate(UUID_TEMPLATE) if x != '-']

def decode_uuid(base64_str):
    if len(base64_str) != 22:
        return base64_str
    result = UUID_TEMPLATE.copy()
    result[0] = base64_str[0]
    result[1] = base64_str[1]

    j = 2
    for i in range(2, 22, 2):
        lhs = BASE64_VALUES[ord(base64_str[i])]
        rhs = BASE64_VALUES[ord(base64_str[i + 1])]

        result[INDICES[j]] = HEX_CHARS[lhs >> 2]
        j += 1
        result[INDICES[j]] = HEX_CHARS[((lhs & 3) << 2) | (rhs >> 4)]
        j += 1
        result[INDICES[j]] = HEX_CHARS[rhs & 0xF]
        j += 1

    return ''.join(result)

with open(r"C:\Users\username\Downloads\cc.config.cc2e9.json", 'r', encoding='utf-8') as f:
    data = json.load(f)

uuids = data['uuids']
versions = data['versions']['import']

version_map = {}
for i in range(0, len(versions), 2):
    line = versions[i]
    hash_ = versions[i + 1]
    version_map[line] = hash_

urls = []
for idx, short_id in enumerate(uuids):
    try:
        decoded = decode_uuid(short_id)
    except Exception as e:
        print(f"Failed to decode uuid[{idx}]: {short_id} error: {e}")
        continue

    if idx not in version_map:
        print(f"Skipping uuid[{idx}] (no version entry): {short_id}")
        continue

    hash_ = version_map[idx]
    url = f"https://eowgame.jcbgame.com/eow-jp-game/bundle/anima/import/{decoded[:2]}/{decoded}.{hash_}.json"
    urls.append(url)

with open('testurl.txt', 'w', encoding='utf-8') as f:
    f.write('\n'.join(urls))

print(f"{len(urls)} -> testurl.txt")

This is a half-finished tool that exports URLs for part of the jsons; I want to keep it lean for a later all-in-one downloader. Thanks for the pointers.

幻能边境anima下载工具.zip (1.6 MB)
Modified from an old project of mine; source included, for reference.

But all the good stuff lives under https://eowgame.jcbgame.com/eow-jp-game/bundle_sp — the spine assets in particular are under https://eowgame.jcbgame.com/eow-jp-game/bundle_sp/special/
Get the hashes from
https://eowgame.jcbgame.com/eow-jp-game/proj.confg.json?time=1750078867078
to obtain the asset import manifests:
https://eowgame.jcbgame.com/eow-jp-game/bundle_sp/spdata/index.dc71c.js
https://eowgame.jcbgame.com/eow-jp-game/bundle_sp/spdata/config.dc71c.json
https://eowgame.jcbgame.com/eow-jp-game/bundle_sp/normal/index.e5e33.js
https://eowgame.jcbgame.com/eow-jp-game/bundle_sp/normal/config.e5e33.json
https://eowgame.jcbgame.com/eow-jp-game/bundle_sp/special/index.081ba.js
https://eowgame.jcbgame.com/eow-jp-game/bundle_sp/special/config.081ba.json

For example:
https://eowgame.jcbgame.com/eow-jp-game/bundle_sp/special/native/f4/f46ec681-55ae-4c56-80cc-dde9ec6cc911.7d010.png
https://eowgame.jcbgame.com/eow-jp-game/bundle_sp/special/native/3f/3f527303-7bd5-441a-bcfa-542b7f145e04.da5f2.bin
https://eowgame.jcbgame.com/eow-jp-game/bundle_sp/special/import/3f/3f527303-7bd5-441a-bcfa-542b7f145e04.b63ce.json

I wrote a generic, step-by-step URL extraction script for the cc.config.*.json files.

解析config.py
import re
import os

# Extension replacement rules: asset type -> file extensions
replace_rules = {
    "cc.ImageAsset": ".json,.png",
    "cc.SpriteFrame": ".json",
    "sp.SkeletonData": ".json,.bin",
    "cc.TextAsset": ".json",
    "cc.VideoClip": ".json,.mp4"
}

BASE64_CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"
BASE64_VALUES = [0] * 128
for idx, char in enumerate(BASE64_CHARS):
    BASE64_VALUES[ord(char)] = idx

HEX_CHARS = list('0123456789abcdef')
_t = ['', '', '', '']
UUID_TEMPLATE = _t + _t + ['-'] + _t + ['-'] + _t + ['-'] + _t + ['-'] + _t + _t + _t
INDICES = [i for i, x in enumerate(UUID_TEMPLATE) if x != '-']


def decode_uuid(base64_str):
    """Restore a base64-compressed string to UUID form."""
    if len(base64_str) != 22:
        return base64_str
    result = UUID_TEMPLATE.copy()

    result[0] = base64_str[0]
    result[1] = base64_str[1]

    j = 2
    for i in range(2, 22, 2):
        lhs = BASE64_VALUES[ord(base64_str[i])]
        rhs = BASE64_VALUES[ord(base64_str[i + 1])]

        result[INDICES[j]] = HEX_CHARS[lhs >> 2]
        j += 1
        result[INDICES[j]] = HEX_CHARS[((lhs & 3) << 2) | (rhs >> 4)]
        j += 1
        result[INDICES[j]] = HEX_CHARS[rhs & 0xF]
        j += 1

    return ''.join(result)


def process_uuids(input_file, output_file):
    """Decode every UUID in the file and save the results."""
    try:
        if not os.path.exists(input_file):
            print(f"Error: file {input_file} does not exist")
            return

        with open(input_file, 'r', encoding='utf-8') as f:
            lines = f.readlines()

        processed_lines = []
        for line in lines:
            line = line.strip()
            if not line:
                continue

            # Split off the part after the @ sign, if any
            parts = line.split('@', 1)
            base64_uuid = parts[0]
            suffix = '@' + parts[1] if len(parts) > 1 else ''

            # Decode the base64 UUID
            decoded_uuid = decode_uuid(base64_uuid)

            # Reassemble the result
            processed_lines.append(decoded_uuid + suffix)

        # Write results to the output file
        with open(output_file, 'w', encoding='utf-8') as f:
            f.write('\n'.join(processed_lines) + '\n')

        print(f"✅ Done, saved to {output_file}")
        print(f"Processed {len(processed_lines)} lines")

    except Exception as e:
        print(f"❌ Error while processing: {str(e)}")


def extract_arrays_from_json(json_path, uuids_path, import_path, native_path, paths_path, types_path, names_path):
    # 1. Read the JSON file
    try:
        with open(json_path, "r", encoding="utf-8") as f:
            content = f.read()
        print(f"✅ Read JSON file, length: {len(content)} chars")
    except Exception as e:
        print(f"❌ Failed to read file: {str(e)}")
        return

    # 2. Extract the uuids array (contents of the [])
    try:
        uuids_match = re.search(r'"uuids":\s*\[([^\]]+)\]', content)
        if not uuids_match:
            print("Warning: uuids array not found")
            return

        uuids_str = uuids_match.group(1)
        # Clean up elements (strip quotes, spaces, commas)
        uuids = [item.strip().strip('"') for item in uuids_str.split(',') if item.strip()]
        print(f"✅ Extracted uuids: {len(uuids)} entries")

        # Save to file
        with open(uuids_path, "w", encoding="utf-8") as f:
            f.write("\n".join(uuids))
        print(f"✅ Saved uuids to: {uuids_path}")

        # Post-process the extracted uuids file
        process_uuids(uuids_path, uuids_path)

    except Exception as e:
        print(f"❌ Failed to process uuids: {str(e)}")

    # 3. Extract the import array (inside versions.import)
    try:
        # Match versions.import: [...]
        import_match = re.search(r'"versions":\s*\{\s*"import":\s*\[([^\]]+)\]', content)
        if not import_match:
            print("Warning: versions.import array not found")
            return

        import_str = import_match.group(1)
        # Group elements in pairs (index + hash string)
        import_items = [item.strip() for item in import_str.split(',') if item.strip()]
        formatted_import = []
        for i in range(0, len(import_items), 2):
            if i + 1 < len(import_items):
                # Keep the index as-is, strip quotes from the string
                num = import_items[i]
                string_val = import_items[i + 1].strip('"')
                formatted_import.append(f"{num}, {string_val},")

        print(f"✅ Extracted import: {len(formatted_import)} lines")

        # Save to file
        with open(import_path, "w", encoding="utf-8") as f:
            f.write("\n".join(formatted_import))
        print(f"✅ Saved import to: {import_path}")
    except Exception as e:
        print(f"❌ Failed to process import: {str(e)}")

    # 4. Extract the native array and save it to native.txt
    try:
        native_match = re.search(r'"native":\s*\[([^\]]+)\]', content)
        if not native_match:
            print("Warning: native array not found")
            return

        native_str = native_match.group(1)
        # Group elements in pairs (index + hash string)
        native_items = [item.strip() for item in native_str.split(',') if item.strip()]
        formatted_native = []
        for i in range(0, len(native_items), 2):
            if i + 1 < len(native_items):
                # Keep the index as-is, strip quotes from the string
                num = native_items[i]
                string_val = native_items[i + 1].strip('"')
                formatted_native.append(f"{num}, {string_val},")

        print(f"✅ Extracted native: {len(formatted_native)} lines")

        # Save to native.txt
        with open(native_path, "w", encoding="utf-8") as f:
            f.write("\n".join(formatted_native))
        print(f"✅ Saved native to: {native_path}")
    except Exception as e:
        print(f"❌ Failed to process native: {str(e)}")

    # 5. Extract the paths block and save it to paths.txt
    try:
        paths_match = re.search(r'"paths":\s*\{([^}]+)\}', content)
        if not paths_match:
            print("Warning: paths block not found")
            return

        paths_str = paths_match.group(1)
        # Match each path entry
        path_items = re.findall(r'"(\d+)":\s*\[\s*"([^"]+)"\s*,\s*(\d+)\s*,\s*(\d+)\s*\]', paths_str)

        if not path_items:
            print("Warning: no valid path entries found in the paths block")
            return

        print(f"✅ Extracted paths: {len(path_items)} entries")

        # Save to paths.txt, format: "0":["spine/path",4,1]
        with open(paths_path, "w", encoding="utf-8") as f:
            for key, path, num1, num2 in path_items:
                f.write(f'"{key}":["{path}",{num1},{num2}]\n')

        print(f"✅ Saved paths to: {paths_path}")
    except Exception as e:
        print(f"❌ Failed to process paths: {str(e)}")

    # 6. Extract the types array and save it to types.txt
    try:
        types_match = re.search(r'"types":\s*\[([^\]]+)\]', content)
        if not types_match:
            print("Warning: types array not found")
            return

        types_str = types_match.group(1)
        # Clean up elements (strip quotes, spaces, commas)
        types = [item.strip().strip('"') for item in types_str.split(',') if item.strip()]
        print(f"✅ Extracted types: {len(types)} entries")

        # Save to types.txt
        with open(types_path, "w", encoding="utf-8") as f:
            f.write("\n".join(types) + "\n")

        print(f"✅ Saved types to: {types_path}")
    except Exception as e:
        print(f"❌ Failed to process types: {str(e)}")

    # 7. Extract the name field and save it to name.txt
    try:
        # Match "name": "anima" (works at any nesting level)
        name_match = re.search(r'"name"\s*:\s*"([^"]+)"', content)
        if not name_match:
            print("Warning: name field not found")
            return

        name_value = name_match.group(1)
        print(f"✅ Extracted name: {name_value}")

        # Save to name.txt (value only, no prefix)
        with open(names_path, "w", encoding="utf-8") as f:
            f.write(f"{name_value}\n")

        print(f"✅ Saved name to: {names_path}")
    except Exception as e:
        print(f"❌ Failed to process name: {str(e)}")


def replace_types_content(types_path):
    """Apply the extension replacement rules to types.txt."""
    try:
        with open(types_path, 'r', encoding='utf-8') as file:
            lines = file.readlines()

        # Apply the replacements
        new_lines = []
        for line in lines:
            replaced = False
            for old_text, new_text in replace_rules.items():
                if old_text in line:
                    new_lines.append(new_text + '\n')
                    replaced = True
                    break
            if not replaced:
                new_lines.append('\n')

        # Write the replaced content back to types.txt
        with open(types_path, 'w', encoding='utf-8') as file:
            file.writelines(new_lines)

        print("Replacement done; result saved to types.txt.")
    except FileNotFoundError:
        print("types.txt not found; make sure it exists.")
    except Exception as e:
        print(f"Error while processing the file: {e}")


if __name__ == "__main__":
    # File paths (same directory)
    json_path = "cc.config.cc2e9.json"
    uuids_path = "uuids.txt"
    import_path = "imports.txt"
    native_path = "natives.txt"
    paths_path = "paths.txt"
    types_path = "types.txt"
    names_path = "name.txt"

    extract_arrays_from_json(json_path, uuids_path, import_path, native_path, paths_path, types_path, names_path)
    replace_types_content(types_path)
    print("\n🎉 Extraction, UUID decoding, and type replacement complete!")
提取url.py
def main():
    # Make sure the output file exists (and is empty)
    with open('output.txt', 'w', encoding='utf-8') as f:
        f.write('')
    
    # Read the first line of name.txt
    name = ''
    try:
        with open('name.txt', 'r', encoding='utf-8') as f:
            name = f.readline().strip()
        print(f"[INFO] Read name: {name}")
    except FileNotFoundError:
        print("Error: name.txt not found")
        return
    
    # Read paths.txt
    paths_data = {}
    try:
        with open('paths.txt', 'r', encoding='utf-8') as f:
            for line_num, line in enumerate(f, 1):
                line = line.strip()
                if not line or line.startswith('#'):
                    continue
                
                try:
                    # Extract the index
                    serial = line.split(':[')[0].strip('"')
                    
                    # Extract the path and the type number
                    path_part = line.split(':[')[1].split(']')[0]
                    path = path_part.split(',')[0].strip('"')
                    num = int(path_part.split(',')[1].strip()) - 1
                    
                    paths_data[serial] = {
                        'path': path,
                        'num': num,
                        'line_num': line_num
                    }
                    print(f"[INFO] Parsed paths.txt line {line_num}, index: {serial}, path: {path}, num: {num}")
                except Exception as e:
                    print(f"[ERROR] Failed to parse paths.txt line {line_num}: {line}, error: {e}")
    except FileNotFoundError:
        print("Error: paths.txt not found")
        return
    
    # Read types.txt (robust against blank lines)
    types_data = []
    try:
        with open('types.txt', 'r', encoding='utf-8') as f:
            for line_num, line in enumerate(f, 1):
                raw_line = line.strip('\n\r')  # strip newlines only
                processed_line = raw_line.strip()  # strip surrounding whitespace
                
                # Keep both the raw and processed forms (for debugging)
                types_data.append({
                    'raw': raw_line,
                    'processed': processed_line
                })
            
            print(f"[INFO] Read types.txt, {len(types_data)} lines")
            # Print the first few lines for debugging
            print("[DEBUG] First 5 lines of types.txt:")
            for i in range(min(5, len(types_data))):
                print(f"  line {i}: raw='{types_data[i]['raw']}', processed='{types_data[i]['processed']}'")
    except FileNotFoundError:
        print("Error: types.txt not found")
        return
    
    # Read natives.txt
    natives_data = {}
    try:
        with open('natives.txt', 'r', encoding='utf-8') as f:
            for line_num, line in enumerate(f, 1):
                line = line.strip()
                if not line or line.startswith('#') or ';;;' in line:
                    continue
                
                parts = line.split(',')
                if len(parts) >= 2:
                    serial = parts[0].strip()
                    hash_val = parts[1].strip()
                    natives_data[serial] = hash_val
                    print(f"[INFO] Parsed natives.txt line {line_num}, index: {serial}, hash: {hash_val}")
    except FileNotFoundError:
        print("Error: natives.txt not found")
        return
    
    # Read uuids.txt
    uuids_data = []
    try:
        with open('uuids.txt', 'r', encoding='utf-8') as f:
            uuids_data = [line.strip() for line in f if line.strip()]
        print(f"[INFO] Read uuids.txt, {len(uuids_data)} UUIDs")
    except FileNotFoundError:
        print("Error: uuids.txt not found")
        return
    
    # Read imports.txt
    imports_data = {}
    try:
        with open('imports.txt', 'r', encoding='utf-8') as f:
            for line_num, line in enumerate(f, 1):
                line = line.strip()
                if not line or line.startswith('#'):
                    continue
                
                parts = line.split(',')
                if len(parts) >= 2:
                    serial = parts[0].strip()
                    hash_val = parts[1].strip()
                    imports_data[serial] = hash_val
                    print(f"[INFO] Parsed imports.txt line {line_num}, index: {serial}, hash: {hash_val}")
    except FileNotFoundError:
        print("Error: imports.txt not found")
        return

    # Process each path entry
    with open('output.txt', 'a', encoding='utf-8') as output_file:
        for serial, data in paths_data.items():
            path = data['path']
            num = data['num']
            line_num = data['line_num']
            
            print(f"\n[Processing] paths.txt line {line_num}, index: {serial}")
            
            # Compute the type index
            type_idx = num + 1
            print(f"[INFO] type_idx: {type_idx}")
            
            # Make sure type_idx is within range
            if type_idx >= len(types_data):
                print(f"[ERROR] type index {type_idx} out of range for types.txt ({len(types_data)} lines), skipping")
                continue
            
            # Fetch the extensions (robust against blank lines)
            line_data = types_data[type_idx]
            raw_line = line_data['raw']
            processed_line = line_data['processed']
            
            print(f"[INFO] types.txt line {type_idx} - raw: '{raw_line}', processed: '{processed_line}'")
            
            # A blank (or all-whitespace) line means no extension
            if not processed_line:
                houzhui_list = []
            else:
                houzhui_list = [h.strip() for h in processed_line.split(',')]
            
            # houzhui = extension/suffix
            houzhui1 = houzhui_list[0] if len(houzhui_list) > 0 else ''
            houzhui2 = houzhui_list[1] if len(houzhui_list) > 1 else ''
            print(f"[INFO] houzhui1: {houzhui1}, houzhui2: {houzhui2}")
            
            # Determine the file name
            file_name = determine_file_name(path)
            print(f"[INFO] file name: {file_name}")
            
            # Fetch the uuid
            uuid_idx = int(serial)
            if uuid_idx >= len(uuids_data):
                print(f"[ERROR] uuid index {uuid_idx} out of range for uuids.txt ({len(uuids_data)} entries), skipping")
                continue
            uuid_val = uuids_data[uuid_idx]
            uu = uuid_val[:2] if uuid_val else ''
            print(f"[INFO] uuid: {uuid_val}, uu: {uu}")
            
            # First extension (.json etc.; hash comes from imports)
            if houzhui1:
                hash1 = imports_data.get(serial, '')
                print(f"[INFO] imports hash for index {serial}: {hash1}")
                if not hash1:
                    print(f"[WARN] No imports hash found for index {serial}, skipping the houzhui1 URL")
                else:
                    url = f"https://eowgame.jcbgame.com/eow-jp-game/bundle/{name}/import/{uu}/{uuid_val}.{hash1}{houzhui1}\n"
                    output_file.write(url)
                    print(f"[SUCCESS] Generated import URL: {url.strip()}")
            
            # Second extension (.png/.bin etc.; hash comes from natives)
            if houzhui2:
                hash2 = natives_data.get(serial, '')
                print(f"[INFO] natives hash for index {serial}: {hash2}")
                if not hash2:
                    print(f"[WARN] No natives hash found for index {serial}, skipping the houzhui2 URL")
                else:
                    url = f"https://eowgame.jcbgame.com/eow-jp-game/bundle/{name}/native/{uu}/{uuid_val}.{hash2}{houzhui2}\n"
                    output_file.write(url)
                    print(f"[SUCCESS] Generated native URL: {url.strip()}")

def determine_file_name(path):
    """Derive the file name from the path, special-casing spriteFrame and texture entries."""
    parts = path.split('/')
    last_part = parts[-1]
    
    if 'spriteFrame' in parts or 'texture' in parts:
        for i in range(len(parts)-2, -1, -1):
            if parts[i] not in ['spriteFrame', 'texture']:
                return parts[i]
        return last_part
    else:
        return last_part.split('.')[0] if '.' in last_part else last_part

if __name__ == "__main__":
    main()
下载.py
import requests
import os
import time
from concurrent.futures import ThreadPoolExecutor
import urllib3
from datetime import datetime

# 禁用 InsecureRequestWarning 警告
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

def get_simplified_timestamp():
    now = datetime.now()
    return now.strftime("%y%m%d%H%M%S")

def download_file(link, download_dir, total_files, index, digit_count, success_files, failed_files):
    # 提取文件名和路径
    path_parts = link.split("/")
    file_name = path_parts[-1].split("?")[0]
    relative_path = "/".join(path_parts[3:-1])  # 假设前三个部分是协议和域名,可根据实际情况调整
    local_dir = os.path.join(download_dir, relative_path)
    local_path = os.path.join(local_dir, file_name)

    max_retries = 3  # 最大重试次数
    retry_delay = 5  # 重试延迟时间(秒)

    # 创建本地目录
    os.makedirs(local_dir, exist_ok=True)

    for attempt in range(max_retries):
        try:
            # 检查本地文件是否存在且大小相同(忽略证书验证)
            if os.path.exists(local_path):
                local_size = os.path.getsize(local_path)
                headers = requests.head(link, allow_redirects=True, timeout=10, verify=False).headers
                remote_size = int(headers.get('Content-Length', 0))
                if local_size == remote_size and remote_size != 0:
                    print(f"[跳过] {total_files}/{index:0{digit_count}d} 已存在且大小一致: {local_path}")
                    success_files.append(file_name)
                    return

            response = requests.get(link, stream=True, timeout=10, verify=False)
            response.raise_for_status()

            with open(local_path, 'wb') as file:
                for chunk in response.iter_content(chunk_size=8192):
                    if chunk:
                        file.write(chunk)

            if os.path.exists(local_path):
                final_size = os.path.getsize(local_path)
                if final_size > 0:
                    print(f"[成功] {total_files}/{index:0{digit_count}d} 下载完成: {local_path}")
                    success_files.append(file_name)
                    return
                else:
                    raise Exception("下载后文件大小为0")
        except (requests.RequestException, Exception) as e:
            if attempt < max_retries - 1:
                print(f"[尝试 {attempt + 1}/{max_retries}] {total_files}/{index:0{digit_count}d} 下载失败: {str(e)}, 重试中...")
                time.sleep(retry_delay)
            else:
                print(f"[失败] {total_files}/{index:0{digit_count}d} 下载失败: {str(e)}")
                failed_files.append(file_name)


def count_files_in_directory(directory):
    file_count = 0
    for root, dirs, files in os.walk(directory):
        file_count += len(files)
    return file_count

def main_download():
    file_path = "output.txt"
    download_dir = os.path.join(os.getcwd(), "com.superhgame.rpg.emma")

    if not os.path.exists(file_path):
        print("❌ download list file not found")
        return

    with open(file_path, "r", encoding="utf-8") as f:
        download_links = [line.strip() for line in f if line.strip()]

    if not download_links:
        print("❌ no valid links in the download list")
        return

    total_files = len(download_links)
    digit_count = len(str(total_files))
    os.makedirs(download_dir, exist_ok=True)

    success_files = []
    failed_files = []

    print(f"✅ preparing to download {total_files} files into {download_dir}")
    print("-" * 60)

    with ThreadPoolExecutor(max_workers=32) as executor:
        futures = []
        for index, link in enumerate(download_links, start=1):
            futures.append(executor.submit(
                download_file,
                link, download_dir, total_files, index, digit_count,
                success_files, failed_files
            ))

        for future in futures:
            future.result()

    print("\n" + "=" * 60)
    print(f"📦 download finished | succeeded: {len(success_files)} | failed: {len(failed_files)}")
    if failed_files:
        print("\n❌ the following files failed to download (check them manually):")
        for fname in failed_files:
            print(f" - {fname}")
    else:
        print("\n🎉 all files downloaded successfully!")

    # Compare the actual file count in the target directory with the list
    actual_file_count = count_files_in_directory(download_dir)
    print(f"\n📁 files actually in the target directory: {actual_file_count}")
    print(f"📋 files in the download list: {total_files}")

    if actual_file_count == total_files:
        print("✅ actual file count matches the download list.")
    else:
        print("❌ actual file count differs from the download list, please check.")

    # Wait for input so the console window doesn't close immediately
    input("Press Enter to exit...")


if __name__ == "__main__":
    try:
        import requests
    except ImportError:
        print("❌ requests is missing, install it first: pip install requests")
        exit(1)

    main_download()
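One extra check that can be bolted on after downloading: per the manifest snippet earlier in the thread, the 5-character hash embedded in each native filename appears to be the first five hex chars of that file's own md5 (the 28643-byte PNG with md5 `5cf9a9b0...` is served as `482373cd-94be-48c4-b78b-94a392772664.5cf9a.png`; the "shifted" pairing was just Edge's print view mangling the JSON). A minimal sketch that verifies downloaded files against this convention — the filename pattern is an assumption generalized from the example URLs:

```python
import hashlib
import os
import re

# Assumed pattern: <uuid>.<5-hex-hash>.<ext>, where the 5-hex part is the
# first five hex chars of the file's md5 (per the manifest example above).
NAME_RE = re.compile(r"^(?P<uuid>[0-9a-f-]{36})\.(?P<h>[0-9a-f]{5})\.\w+$")

def verify_file(path):
    """Return True if the file's md5 starts with the hash in its name,
    False if it does not, or None if the name doesn't match the pattern."""
    m = NAME_RE.match(os.path.basename(path))
    if not m:
        return None
    with open(path, "rb") as f:
        digest = hashlib.md5(f.read()).hexdigest()
    return digest.startswith(m.group("h"))
```

Anything that returns False was either corrupted in transit or doesn't follow the suffix convention.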

Not sure if this helps.


My current brute-force approach: parse each uuid, build the import URL, check whether a fixed node in the import JSON is sp.SkeletonData, and if it is, extract the atlas data and use the uuid at its head to build the resource URL.
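For the sp.SkeletonData check, one option that avoids decoding Cocos Creator's packed import format is a plain string test on the serialized JSON. `contains_skeleton_data` is a hypothetical helper and the example shapes below are made up for illustration; the real import JSON layout still needs to be confirmed against the live files:

```python
import json

def contains_skeleton_data(import_json):
    # Heuristic: if the deserialized import JSON mentions the type string
    # "sp.SkeletonData" anywhere, treat the asset as Spine skeleton data.
    # (Assumption: the type name appears verbatim; false positives are
    # possible if the string happens to occur in unrelated data.)
    return "sp.SkeletonData" in json.dumps(import_json)

# Hypothetical shapes, for illustration only:
spine_like = [1, [["sp.SkeletonData", ["_name"], 0]], []]
image_like = [1, [["cc.ImageAsset", ["_name"], 0]], []]
```

A matching file would then have its atlas extracted and its leading uuid turned into a native URL, as described above.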
Half-finished tooling:

import json

BASE64_CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"
BASE64_VALUES = [0] * 128
for idx, char in enumerate(BASE64_CHARS):
    BASE64_VALUES[ord(char)] = idx

HEX_CHARS = list('0123456789abcdef')
_t = ['', '', '', '']
UUID_TEMPLATE = _t + _t + ['-'] + _t + ['-'] + _t + ['-'] + _t + ['-'] + _t + _t + _t
INDICES = [i for i, x in enumerate(UUID_TEMPLATE) if x != '-']

def decode_uuid(base64_str):
    """Expand a 22-char compressed uuid (Cocos Creator scheme) into the dashed 36-char form."""
    if len(base64_str) != 22:
        return base64_str
    result = UUID_TEMPLATE.copy()
    result[0] = base64_str[0]
    result[1] = base64_str[1]

    j = 2
    for i in range(2, 22, 2):
        lhs = BASE64_VALUES[ord(base64_str[i])]
        rhs = BASE64_VALUES[ord(base64_str[i + 1])]

        result[INDICES[j]] = HEX_CHARS[lhs >> 2]
        j += 1
        result[INDICES[j]] = HEX_CHARS[((lhs & 3) << 2) | (rhs >> 4)]
        j += 1
        result[INDICES[j]] = HEX_CHARS[rhs & 0xF]
        j += 1

    return ''.join(result)

with open(r"C:\Users\username\Downloads\cg.json", 'r', encoding='utf-8') as f:
    data = json.load(f)

uuids = data['uuids']
versions = data['versions']['import']

# Build the uuid-index → hash map
version_map = {}
for i in range(0, len(versions), 2):
    line = versions[i]
    hash_ = versions[i + 1]
    version_map[line] = hash_

urls = []
for idx, short_id in enumerate(uuids):
    try:
        # Split the base64 part from the appended sub-asset info
        if '@' in short_id:
            base_part, extra = short_id.split('@', 1)
        else:
            base_part, extra = short_id, None

        decoded = decode_uuid(base_part)

        # Re-attach the sub-asset suffix (if any)
        if extra:
            decoded_full = f"{decoded}@{extra}"
        else:
            decoded_full = decoded

    except Exception as e:
        print(f"failed to parse uuid[{idx}]: {short_id} error: {e}")
        continue

    if idx not in version_map:
        print(f"skipping uuid[{idx}] with no version entry: {short_id}")
        continue

    hash_ = version_map[idx]
    url = f"https://eowgame.jcbgame.com/eow-jp-game/bundle_sp/special/import/{decoded[:2]}/{decoded_full}.{hash_}.json"
    urls.append(url)

with open('testurl.txt', 'w', encoding='utf-8') as f:
    f.write('\n'.join(urls))

print(f"{len(urls)} URLs -> testurl.txt")
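As a sanity check on the decoder, the compression can be inverted by hand: Cocos Creator's compressed uuid keeps the first two hex chars and packs each remaining group of three hex digits into a 12-bit value, emitted as two base64 characters. `encode_uuid` below is a hypothetical inverse written for testing (not from the game's code), round-tripped against the PNG uuid from the URLs above; `decode_uuid` here is a compact self-contained rewrite of the routine in the script:

```python
# Compressed-uuid helpers matching Cocos Creator's scheme: the first two hex
# chars pass through unchanged, then each group of three hex digits becomes
# a 12-bit value written as two base64 characters.
BASE64_CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"

def encode_uuid(uuid_str):
    """Compress a dashed 36-char uuid into the 22-char form (test inverse)."""
    hex_str = uuid_str.replace("-", "")          # 32 hex chars
    out = [hex_str[0], hex_str[1]]               # first two chars kept as-is
    for i in range(2, 32, 3):                    # 10 groups of 3 hex digits
        value = int(hex_str[i:i + 3], 16)
        out.append(BASE64_CHARS[value >> 6])     # high 6 bits
        out.append(BASE64_CHARS[value & 0x3F])   # low 6 bits
    return "".join(out)

def decode_uuid(short):
    """Expand a 22-char compressed uuid back to the dashed 36-char form."""
    hex_str = short[0] + short[1]
    for i in range(2, 22, 2):
        value = (BASE64_CHARS.index(short[i]) << 6) | BASE64_CHARS.index(short[i + 1])
        hex_str += format(value, "03x")
    return "-".join([hex_str[0:8], hex_str[8:12], hex_str[12:16],
                     hex_str[16:20], hex_str[20:32]])

full = "482373cd-94be-48c4-b78b-94a392772664"  # the PNG uuid from the URLs above
short = encode_uuid(full)
assert decode_uuid(short) == full
print(short)  # -> 48I3PNlL5IxLeLlKOSdyZk
```

The compressed form matches the `48/48...` two-char directory prefix used in the asset URLs, since the first two chars survive compression unchanged.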