Weekly Tech Radar · 2025-12-w3
This week covers the GPT-5.2 split reactions, the ChatGPT App Directory launch, and the Zara AI model debate, plus briefs on Apple SHARP, Meta SAM Audio, AI homework grading in schools, and Disney's $1B investment in OpenAI.
- OpenAI releases GPT-5.2: users call it a "soulless scientist"
- ChatGPT launches an App Directory: ChatGPT can "use third-party apps" directly
- Zara sparks debate: are AI model images an efficiency tool or a job threat?
- Apple open-sources SHARP: AI turns photos into 3D in under a second
- Meta releases SAM Audio: a breakthrough in multimodal audio separation
- AI homework grading enters schools: "grade today, explain today"
- Disney announces a $1B investment in OpenAI, allowing Sora to generate Mickey and other characters
Focus 1 • OpenAI releases GPT-5.2: users call it a "soulless scientist"
Core changes in GPT-5.2:
- Stronger reasoning: designed for multi-step tasks with more stable long-context understanding. In high-difficulty domains like finance or life sciences, it can "think" for up to an hour.
- Lower hallucination rate: compared with GPT-5.1, factual errors are down 20-40%, with an average hallucination rate around 0.8%.
- Upgraded multimodal understanding: can parse complex visual data such as technical charts, UI screenshots, and medical imaging, with about half the error rate of the previous model.
- Long context window: supports up to 405k tokens (about 300k Chinese characters), staying coherent across giant documents and long workflows.
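The 405k-token figure is easiest to reason about with a quick budget check before sending a large document. A minimal sketch, assuming the common rough heuristic of ~4 characters per token (actual tokenizer counts vary by model and are very different for Chinese text):

```python
# Rough check of whether a document fits GPT-5.2's reported 405k-token
# context window. The 4-chars-per-token ratio is a loose assumption,
# not a property of any specific tokenizer.
CONTEXT_WINDOW = 405_000  # tokens, as reported for GPT-5.2

def fits_in_context(text: str, chars_per_token: float = 4.0,
                    reserve: int = 5_000) -> bool:
    """Return True if `text` likely fits, leaving `reserve` tokens for the reply."""
    estimated_tokens = len(text) / chars_per_token
    return estimated_tokens + reserve <= CONTEXT_WINDOW

doc = "x" * 1_000_000          # a ~1M-character document
print(fits_in_context(doc))    # ~250k estimated tokens + reserve -> True
```

At ~4 characters per token, 405k tokens corresponds to roughly 1.6M characters of English text, which is why the model stays coherent across "giant documents."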
Online reactions to GPT-5.2 are sharply split, often describing it as "smart but dull."
- "Soulless scientist": impressive at specialized, open-ended problems, but many users say it lost the "fun" and personality of prior models, becoming rigid and dry.
- High censorship: rated the most strictly filtered flagship model across multiple community benchmarks; even academic or fictional contexts often trigger refusals to discuss "sensitive" historical or cultural topics.
- Coding comparisons: compared with other mainstream models, GPT-5.2 is seen as stronger at raw problem-solving and logic, but many developers still prefer Claude for real software work, saying it better understands architectural context and produces a more natural code style.
Focus 2 • OpenAI launches App Directory: ChatGPT can "use third-party apps" directly
OpenAI announced the official opening of the ChatGPT App Directory to third-party developers. Developers can submit apps so users can complete third-party actions directly inside ChatGPT conversations. For example, users can order groceries, create presentations, search for homes, or interact more deeply with third-party services.
The launch signals ChatGPT's shift toward an "AI platform." Regardless of device, as long as you can use ChatGPT, you can access supported third-party apps. Its integration, distribution model, and business opportunities may become a new battleground for the AI industry.
Among the first partners are Spotify, Expedia, Zillow, FI, and Canva. Users can interact with these services in natural language to find music, plan trips, browse listings, or produce designs, all within ChatGPT.
For developers, this is a new distribution channel. Through the ChatGPT interface, third-party apps can reach a massive user base without relying on traditional app stores or standalone websites. It may shift how users discover and use digital services, from "download the app" to "use it directly in AI chat."
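OpenAI has not published its submission format in this piece, but the shape of the problem is familiar: a directory listing needs identity, capability, and entry-point metadata. Purely as an illustration (a hypothetical manifest, not OpenAI's actual schema; the app name and action ids are invented):

```python
# Hypothetical app-directory manifest -- illustrative only, NOT OpenAI's
# actual submission schema. It sketches the kind of metadata a
# conversational app listing plausibly needs.
manifest = {
    "name": "grocery-helper",  # invented example app
    "description": "Order groceries from inside a chat conversation.",
    "actions": [
        {"id": "search_products", "params": ["query"]},
        {"id": "place_order", "params": ["cart_id", "address"]},
    ],
}

def validate(m: dict) -> list[str]:
    """Return a sorted list of missing required fields (empty means valid)."""
    required = {"name", "description", "actions"}
    return sorted(required - m.keys())

print(validate(manifest))  # []
```

The interesting design question is discovery: in an app store users browse listings, while in a chat the assistant has to match an action's description to the user's intent, so descriptions become the primary "search surface."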
Focus 3 • Zara sparks debate: are AI model images an efficiency tool or a job threat?
This week, Zara announced large-scale use of generative AI to create product promo images on its e-commerce platform, emphasizing it complements rather than replaces real models. The move sparked broad debate about creativity, employment, and authenticity in advertising.
- Digital "try-on": Zara uses AI to digitally "dress" new designs on existing photos of real models, so models do not need to return to the studio for every new item.
- Compensation and consent: unlike some controversial cases, Zara obtains model consent before AI compositing. Reports say models receive pay comparable to traditional shoots even without being on set.
- Efficiency and speed: this approach shortens the cycle from design to listing, helping the brand showcase new items faster and adapt to fast-moving market trends.
- Industry concerns: despite efficiency gains, groups like the Association of Photographers in London worry it will significantly reduce work for photographers, stylists, and production teams.
- Industry trend: Zara is not alone. H&M and Zalando have also pursued "AI models" and digital image generation plans.
This move marks a deep 2025 fashion-industry tug-of-war between "maximum efficiency" and "ecosystem sustainability." Zara is trying to balance technology with human labor by paying equal rates, but that has not eased concerns about potential negative impacts.
At the heart of the debate is that while algorithm-led workflows can help brands win in rapid turnover, they also squeeze the livelihood of photographers, stylists, and other behind-the-scenes roles.
At the same time, public doubts about whether "AI images equal real products" reflect a broader anxiety about losing fashion's authenticity amid digital convenience. This is not just a change in photography methods, but a fundamental challenge to how value is distributed across the future fashion supply chain.
Brief 1 • Apple open-sources SHARP: AI turns photos into 3D in under a second
Apple released an open-source AI model called SHARP. It uses 3D Gaussian splatting to turn a single 2D photo into an interactive 3D scene in under a second.
- Ultra-fast generation: with just one ordinary photo, it generates a 3D model on a standard GPU in one second. Compared with diffusion models, it is about three orders of magnitude faster (around 1000x).
- Visual quality leap: compared with previous top models, its perceptual image similarity improves significantly, meaning clearer details and structures closer to the real world.
- Metric-scale precision: generated 3D models preserve absolute scale, supporting realistic camera movement. The model also generalizes well to new scenes not seen during training.
The model is now open-sourced on GitHub and Hugging Face.
Brief 2 • Meta releases SAM Audio: a breakthrough in multimodal audio separation
Meta released SAM Audio this week, enabling precise extraction of specific sounds (voices, instruments, animal calls) from complex environments.
- Multimodal interaction: SAM Audio supports three intuitive modes:
  - Text prompts: enter "guitar" or "dog bark" to extract or filter the target sound.
  - Visual guidance: click an object in a video (like a guitarist) and the model uses visual cues to separate its audio.
  - Time selection: select a noisy segment on the timeline to remove it precisely.
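The time-selection mode ultimately maps a start/end time to sample indices and edits that span; SAM Audio does this with learned separation, but the underlying indexing can be sketched locally. A minimal illustration with a plain sample list (this is not the SAM Audio API):

```python
# Conceptual sketch of "time selection": convert seconds to sample
# indices and drop that span. NOT the SAM Audio API -- just the
# indexing idea behind selecting a segment on a timeline.
def remove_segment(samples: list[float], sample_rate: int,
                   start_s: float, end_s: float) -> list[float]:
    """Drop the samples between start_s and end_s (in seconds)."""
    start = int(start_s * sample_rate)
    end = int(end_s * sample_rate)
    return samples[:start] + samples[end:]

audio = [0.0] * 16_000               # one second of silence at 16 kHz
shorter = remove_segment(audio, 16_000, 0.25, 0.5)
print(len(shorter))                  # 12000 -- a quarter second removed
```

The model's contribution is that it can remove only the *noise* within a selected span while keeping the wanted sounds, rather than cutting the span outright as this sketch does.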
Beyond video editing, podcasts, and film production, Meta is working with hearing-aid maker Starkey to explore accessibility uses, such as real-time separation and enhancement of specific voices for people with hearing loss.
The model is now open-sourced on GitHub and Hugging Face.
Brief 3 • AI homework grading enters schools: "grade today, explain today"
Many primary and secondary schools are piloting "AI homework grading machines" to achieve same-day grading and explanation.
- Reduce burden and boost efficiency: the system can scan and grade an entire class within minutes, summarize common mistakes, and generate learning analysis reports, greatly shortening feedback cycles. It also creates a personalized "digital mistake book" for each student.
- Accuracy and fairness: while OCR is mature, errors persist with complex handwriting and subjective questions, raising concerns about scoring fairness.
- Educational dependence: experts worry overreliance on AI could weaken teacher-student emotional interaction and make learning more mechanical.
Education authorities emphasize AI should be a "teaching assistant" rather than a replacement, and that human review is required to preserve warmth and objectivity in education.
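The "summarize common mistakes" step described above is essentially an aggregation over per-question grading results. A minimal sketch (the data shapes and names are hypothetical, not taken from any specific grading product):

```python
from collections import Counter

# Hypothetical per-student grading results: question id -> answered correctly?
results = {
    "alice": {"q1": True,  "q2": False, "q3": False},
    "bob":   {"q1": True,  "q2": False, "q3": True},
    "carol": {"q1": False, "q2": False, "q3": True},
}

def common_mistakes(results: dict, top_n: int = 2) -> list[tuple[str, int]]:
    """Count wrong answers per question and return the most-missed ones."""
    misses = Counter(q for answers in results.values()
                     for q, ok in answers.items() if not ok)
    return misses.most_common(top_n)

print(common_mistakes(results, 1))  # [('q2', 3)] -- every student missed q2
```

The per-student "digital mistake book" is the same data sliced the other way: for each student, the set of questions they got wrong, accumulated over time.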
Brief 4 • Disney announces a $1B investment in OpenAI, allowing Sora to generate Mickey and other characters
Disney officially announced a $1 billion investment in OpenAI and a three-year licensing deal allowing users of the Sora video generation tool to use more than 200 Disney, Marvel, Pixar, and Star Wars characters.
The generated videos are typically short-form social content and are expected to roll out to Sora and ChatGPT users in early 2026.
Disney CEO Bob Iger said the partnership aims to combine "iconic stories and characters" with cutting-edge AI, letting fans create in unprecedented ways. The agreement marks a shift toward compliant commercial collaboration between Hollywood giants and leading AI companies after a period of copyright disputes.
At the same time, Disney sent a cease-and-desist letter to Google, accusing it of using Disney content to train AI models without permission.