Explore enhancements to your spatial business app

    Discover how the latest enhancements and APIs in visionOS 26 build on the access and enterprise capabilities introduced last year. Learn how these new features help you easily build model-training workflows, enhance video feeds, and unify coordinate systems across the local network to create collaborative experiences for your organization's internal apps.

    Chapters

    • 0:04 - Introduction
    • 1:37 - Streamline development
    • 4:54 - Enhance user experience
    • 13:37 - Visualize the environment
    • 24:01 - Next steps

    Resources

    • Building spatial experiences for business apps with enterprise APIs for visionOS
    • Implementing object tracking in your visionOS app
      • HD Video
      • SD Video

    Related Videos

    WWDC25

    • Share visionOS experiences with nearby people

    WWDC24

    • Introducing enterprise APIs for visionOS
    • Create enhanced spatial computing experiences with ARKit
    • Explore object tracking for visionOS

    Hi, my name is Alex Powers, and I’m an engineer on the Enterprise team for visionOS. It’s great to be back at WWDC.

    Last year, we introduced the first set of Enterprise APIs for visionOS. Since then, we’ve been working hard to bring you even more enterprise capabilities.

    Before exploring the new features, let me review the fundamental requirements for the Enterprise APIs.

    Because they offer broad utility and deeper device access, these APIs require a managed entitlement along with a license file tied to your developer account. They’re designed for proprietary, in-house apps developed by your organization for your employees, or for custom apps you build for another business to distribute internally.

    With those considerations in mind, I’m excited to take you through the new Enterprise APIs and some major improvements to the existing APIs.

    I’ll start with changes that will streamline your development and make it easier for you to access enterprise capabilities. Next, I’ll show you a way to enhance the user experience by providing new ways to interact with windows, share content with people nearby, and protect sensitive information. And finally, I'll explore new capabilities to visualize your environment.

    Let me begin by showing you some ways we’re making it easier to access enterprise capabilities and streamline your development.

    Starting with wider API access, we’ve made some changes this year to give you wider access to several APIs we introduced last year.

    We previously introduced the ability for an app to access external video from USB Video Class devices through the Vision Pro Developer Strap. The API allows your app to leverage UVC-compatible webcams for enhanced video conferencing, specialized imaging devices for remote diagnostics, or industrial inspection cameras for quality control.

    We’ve also made access to the Neural Engine available for advanced on-device machine learning. I’m happy to say that with the latest visionOS, these APIs are now available to all developers. You’ll be able to access UVC video and the Neural Engine without an enterprise license or an entitlement.

    Last year, we introduced object tracking for visionOS, enabling powerful experiences where your app can recognize and track specific real world objects. This year, we're adding the ability to train directly from the command line.

    This means you can now automate the model training process, integrate it into your existing pipelines, and manage your object tracking assets more efficiently, all without needing to manually use the CreateML app for each individual object. This tool gives you all the same controls as the CreateML app. I hope this will unlock new workflows and make iterating on your object tracking features faster and more scalable.

    We're also making enterprise license management simpler.

    You can now access your license files directly within your Apple Developer account. Renewals are automatically pushed to your apps over the air, and we’ve created the Vision Entitlement Services framework. This framework makes it straightforward to check if your application is properly licensed and approved for specific features.

    Using Vision Entitlement Services, you can determine whether your app can access a specific enterprise API, like main camera access, and see your license status and its expiration date. And for apps using the Increased Performance Headroom entitlement, you can verify this before intensive tasks to ensure the best performance.

    As an example, let me show you how you would determine if your app is configured properly to access the main camera.

    First, import the framework. Then use the shared EnterpriseLicenseDetails singleton to first confirm that the license is valid, and then confirm that the license is approved for mainCameraAccess.

    So that’s how the latest visionOS expands API access and makes developing models and managing your enterprise apps easier.

    Now let me walk through some new ways for enhancing user experiences by building more intuitive, collaborative, and secure spatial applications. First, we’re introducing a way to make window interactions in spatial environments more natural, especially when moving while wearing Vision Pro.

    We call it Window Follow Mode.

    When enabled, it ensures content remains accessible and relative to your position. To enable this behavior, you need the window-body-follow entitlement. This entitlement is requested and managed as a licensed entitlement.

    Once the entitlement is granted and included in your app, this behavior will be enabled for any window on the device in all applications.

    Standard windows in visionOS remain fixed in space where you place them. But imagine you have a dashboard, a set of instructions, or reference material that you need to glance at frequently while performing a task that requires you to move.

    Window Follow Mode allows you to choose a window and have it move with you as you move from place to place.

    Let's see Window Follow Mode in action.

    Here I am focused on a project at my workbench. At the same time, my manipulator arm is executing a task. I want to monitor the manipulator status, but without constantly interrupting my main task. To enable Window Follow Mode for the status window, I click and hold the window close control. I choose Start Follow Mode.

    And there we go. The status window will follow me as I move back to my work area.

    So that’s Window Follow Mode. One great way to enhance your user experience. But spatial computing is truly at its best when enabling shared, collaborative experiences.

    And that’s precisely what shared coordinate spaces enable. This feature allows people who are physically together to share their spatial experiences with each other.

    Everyone can naturally interact with and discuss the app’s content as if it were physically present. We provide high-level APIs using SharePlay that automatically handle the discovery, connection, and session management for shared coordinate spaces.

    We have a whole session on this called “Share visionOS experiences with nearby people.” While SharePlay offers fantastic ease of use out of the box, we understand that some scenarios demand more control. You might need to integrate with your own custom networking infrastructure. Or maybe your enterprise requirements mean you have to handle device communication directly.

    For these use cases, we’re introducing a new ARKit API for establishing shared coordinate spaces specifically for enterprise customers. It’s called the SharedCoordinateSpaceProvider. This API allows multiple participants to align their coordinate systems. This is achieved by exchanging specific ARKit-generated data over your chosen local network transport. Each participant continuously shares this data with the others. This continuous sharing creates a common coordinate system, enabling shared world anchors to appear consistently for everyone.

    With that, let me run through how you would use this API to build a custom shared experience.

    Using SharedCoordinateSpaceProvider is straightforward if you’ve worked with ARKit data providers before.

    Similar to World Tracking or Hand Tracking, you instantiate it and run it on your active ARKitSession. Once running, the SharedCoordinateSpaceProvider generates the necessary alignment information encapsulated in CoordinateSpaceData objects. You retrieve this data using a pull-based API, the provider’s nextCoordinateSpaceData() function.

    My application is responsible for transmitting this CoordinateSpaceData to the other participants to establish a shared coordinate space. This gives you full control. You can use any networking layer that you want.

    Conversely, when your app receives CoordinateSpaceData from another participant over the network, you provide it to the local SharedCoordinateSpaceProvider by calling its push() method. Each piece of incoming data is tagged with the sender’s unique participantID. Finally, the provider helps you manage the session lifecycle. It offers an eventUpdates async sequence to inform you about important changes, such as when a participant has left the shared space.

    Let me walk through an example of how this works in code.

    I start by creating a SharedCoordinateSpaceProvider and running it on my ARKitSession. When data arrives from another participant on my network, I update the local provider’s understanding using the push(data:) method.

    To get the data my device needs to share, I call the nextCoordinateSpaceData() function. This gives me the CoordinateSpaceData object representing my local state, ready to be broadcast over my network.

    Finally, this logic forms the heart of my custom shared space management, bridging my networking layer with ARKit’s coordinate alignment.
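    To make that bridging concrete, here is one possible shape for the networking side. This is a hedged sketch, not code from this session: it assumes MultipeerConnectivity as the local transport, assumes you serialize CoordinateSpaceData to and from Data yourself, and omits peer discovery; any transport that reaches nearby participants would work just as well.

    import MultipeerConnectivity

    // A hedged sketch of a custom transport for shared coordinate spaces.
    // ARKit doesn't prescribe this transport; the peer naming and the
    // Data round-tripping here are illustrative assumptions, and peer
    // discovery/invitation is omitted for brevity.
    final class CoordinateSpaceTransport: NSObject, MCSessionDelegate {
        private let peerID = MCPeerID(displayName: "vision-pro-participant")
        private lazy var session = MCSession(peer: peerID)

        // Called when coordinate-space data arrives from another participant;
        // forward it to your ARKit layer, which pushes it into the local
        // SharedCoordinateSpaceProvider.
        var onCoordinateSpaceData: ((Data, MCPeerID) -> Void)?

        override init() {
            super.init()
            session.delegate = self
        }

        // Broadcast locally generated coordinate-space data (already serialized
        // to Data) to every connected peer.
        func broadcast(_ data: Data) {
            guard !session.connectedPeers.isEmpty else { return }
            try? session.send(data, toPeers: session.connectedPeers, with: .reliable)
        }

        // MARK: MCSessionDelegate

        func session(_ session: MCSession, didReceive data: Data, fromPeer peerID: MCPeerID) {
            onCoordinateSpaceData?(data, peerID)
        }

        // Unused delegate requirements for this sketch.
        func session(_ session: MCSession, peer peerID: MCPeerID, didChange state: MCSessionState) {}
        func session(_ session: MCSession, didReceive stream: InputStream, withName streamName: String, fromPeer peerID: MCPeerID) {}
        func session(_ session: MCSession, didStartReceivingResourceWithName resourceName: String, fromPeer peerID: MCPeerID, with progress: Progress) {}
        func session(_ session: MCSession, didFinishReceivingResourceWithName resourceName: String, fromPeer peerID: MCPeerID, at localURL: URL?, withError error: Error?) {}
    }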

    So that’s ARKit’s Shared Coordinate API for enterprise developers, a great way to add collaboration to in-house apps.

    My final user experience enhancement is all about data privacy and security. Many enterprise apps handle sensitive information, financial data, patient records, proprietary designs or confidential communications. And while incredibly useful, capabilities such as SharePlay, screen captures and recordings, or even Screen Mirroring can inadvertently expose this sensitive data.

    So today, there’s a new API that gives you control over what can get captured and shared with others.

    And it’s the new contentCaptureProtected view modifier for SwiftUI. It’s supported in apps with the protected content entitlement. You simply add it to any user interface element or even entire RealityKit scenes.

    When content is marked as protected, the system will automatically obscure it in any screen captures, recordings, mirrored or shared views. However, the content remains perfectly visible to the user wearing the device. Here’s an example of a common enterprise use case.

    I have an app that serves as a central repository for my company documents, accessible to all employees. However, certain documents within the system contain sensitive information and shouldn’t be shared widely.

    I’m sharing these documents with my team in the other office. Here, the team can see our meeting notes and the plan for next year. Both of these documents are visible to me and shared with the team. Here, you can see the quarterly report has a lock icon.

    This report shouldn’t be shared and so my team can’t see it on the remote display.

    Now that you’ve seen protected content in action, let's see how to implement it.

    In this example, I have a document view that contains a child view that I’ve called SensitiveDataView. It has information that needs to be seen only on Vision Pro. To protect it, I append the view modifier, contentCaptureProtected, and I’m done. The system will now obscure the feed whenever any attempt is made to share this content. You can also integrate this content protection with authentication flows like Optic ID or corporate single sign-on.

    So that’s how to protect your app’s content. Both 2D and 3D content can be protected with the same simple modifier.

    Those features enhance the experience within the digital space. Now, I’ll focus on some features designed to help visualize the environment and bridge the physical and digital worlds.

    First, we’re expanding camera access on Vision Pro.

    Vision Pro uses its sophisticated camera system to capture the wearer’s environment with the forward cameras providing the passthrough experience.

    Last year, we shipped an API to provide access to the device's left main camera video feed. And this year, we’ve expanded the API to provide direct access to the individual left or right cameras, or access both for stereo processing and analysis. If you’re already familiar, it’s the CameraFrameProvider API in ARKit.

    And now, camera feed support is available in both the Immersive Space and Shared Space environments, allowing your app to function alongside other apps and windows.
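    For orientation, here is a rough sketch of what pulling frames from a single camera with CameraFrameProvider can look like. The format and position calls below follow the existing main camera access API and are assumptions rather than the exact code from this session; per the session, you can now also target the right camera, or both cameras for stereo processing.

    import ARKit

    // A hedged sketch of streaming one main camera feed with ARKit's
    // CameraFrameProvider (requires the main camera access enterprise
    // entitlement). Treat the exact format/position values as assumptions.
    func streamLeftMainCamera() async throws {
        let arkitSession = ARKitSession()
        let cameraFrameProvider = CameraFrameProvider()

        // Pick a supported format for the left main camera.
        guard let format = CameraVideoFormat
            .supportedVideoFormats(for: .main, cameraPositions: [.left])
            .first else { return }

        try await arkitSession.run([cameraFrameProvider])

        guard let frameUpdates = cameraFrameProvider.cameraFrameUpdates(for: format) else { return }
        for await frame in frameUpdates {
            if let sample = frame.sample(for: .left) {
                // Process, display, or analyze the camera image here.
                let pixelBuffer = sample.pixelBuffer
                _ = pixelBuffer
            }
        }
    }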

    So that’s how the latest visionOS makes camera access even more flexible.

    Now let me show you a new way to visualize details in your surroundings.

    Professionals often need to monitor specific details in their work area. For example, technicians need to read small gauges on complex machinery, or inspectors might need to examine components in poorly lit areas.

    To address this, we’re introducing a powerful new feature that allows people wearing Vision Pro to select a specific area in their real-world view and provide a dedicated video feed of that area in its own window.

    This feed can be magnified or enhanced, making critical details clear.

    There’s a new SwiftUI view in VisionKit called CameraRegionView.

    You simply position this window visually over the area you want to enhance. Then, the CameraRegionView uses its own position to provide the appropriate region in space for the virtual camera.

    If you require more fine-grained control, you can use the new CameraRegionProvider API in ARKit.

    This gives you direct access and is useful if you’re already using ARKit, familiar with anchors, or have more specific UI needs.

    Here’s a demo of how it works using an example status app that I’ve created.

    Here, you can see that I’m back with my project. This time, I’d like to monitor the pressure in the system while I work.

    I’ll open the inspector window of my status app and place it in front of the gauge.

    As you can see, the video feed of the gauge has appeared in my status app. Now I can return to work and keep an eye on the pressure while I work.

    Now let me show you how I added a Camera Region to my app in just a few lines of code using SwiftUI and the VisionKit API.

    First, I import VisionKit.

    I define a standard SwiftUI view. I’ve called it InspectorView. This will contain the camera region. The core of this view is CameraRegionView. I’m initializing it with the isContrastAndVibrancyEnhancementEnabled parameter, passing true to enable stabilization with contrast and vibrancy enhancement.

    As I mentioned, this view needs to live in its own window because it uses the window's position to determine what part of the passthrough is processed. For that, let’s look at the App struct.

    Here’s my app struct. I have a main WindowGroup for my primary app content. I’ll make a second WindowGroup for the InspectorView.

    That’s enough to add a camera region to my app. But for more complex applications, CameraRegionView supports a closure. So I’m going to change my code to use this closure to analyze the camera images, and later, I may add a feature to save the images to a file.

    I’ll modify the CameraRegionView to accept a closure, allowing me to process each camera frame as it arrives.

    First, I add my CameraFeedDelivery class, which I’ve made to capture camera frames and deliver them to the rest of my app.

    My closure will use the pixelBuffer from the CameraRegionView.

    Here, I’ll check for errors and pass the pixelBuffer to my cameraFeedDelivery class. My closure returns nil, which indicates that I’ve not modified the pixelBuffer. I could also use this closure for custom processing. If I modify the pixelBuffer and return it, then the CameraRegionView would render the adjusted camera image.

    So with just a few lines of code, I’ve added camera regions to my app. In my example, I enabled contrast and vibrancy enhancement. But the Camera Region APIs provide two built-in processing capabilities. First is image stabilization. This ensures that content remains anchored and stable during natural head movements. And second is contrast and vibrancy enhancement, which includes stabilization and optimizes for brightness and color representation.

    Now let’s look at ARKit’s API for camera regions. Perhaps your application would like a camera region associated with a particular 3D object. Or you'd like to place a camera region after recognizing a specific object in the environment. If your application needs this level of fine-grained control over anchors and 3D objects, this API provides the low-level primitives, and you'll need to define the anchors.

    In ARKit, your anchor defines a virtual window into the real world by specifying its transform and physical size in meters. This window defines an area where you’ll see the direct, stabilized view of the passthrough camera feed.

    You can think of it like placing a virtual camera right there in your physical space. This virtual camera doesn’t need to be attached to a visionOS window. It can produce a feed of any location within view of Vision Pro’s cameras.

    Now let's take a closer look at the API.

    ARKit offers a new type of data provider called the CameraRegionProvider. Integrating camera regions follows a familiar ARKit pattern.

    I start by running a data provider on my ARKitSession, just like I would for other ARKit features. With the provider up and running, my next step is to pinpoint the area for a camera region.

    I do this by creating a CameraRegionAnchor and adding it to my provider. Think of these anchors as specifying the exact regions in the real world that I want for the virtual camera. As ARKit runs, the provider sends updates to these anchors. Each update comes with a new pixelBuffer. This buffer contains the stabilized view for that specific spatial region.

    So let’s dive into how I create one of these anchors.

    Creating a CameraRegionAnchor is straightforward. I define its position and orientation in the world using a standard 6-degree-of-freedom transform. Then I specify its physical size, its width, and height in meters. Together, these parameters define the real-world window for the camera region. I also need to tell ARKit if I want the window contrast enhanced or just stabilized. Then I add it to the CameraRegionProvider.

    After adding the anchor, I call anchorUpdates(forID:) and pass the ID of the new anchor. The camera feed now appears exactly at the location specified by the anchor, and my code can handle the pixelBuffers provided with each update.

    So that’s Camera Regions in ARKit, an incredibly useful tool for keeping track of specific areas in your environment. But before I leave the topic, there are some points I’d like you to keep in mind.

    The pass-through content in the CameraRegionView, like any SwiftUI view, can be zoomed or panned using standard techniques. If you implement these transformations, ensure they are also applied to any camera frames you save or transmit remotely.

    It’s important to understand that the enhancement algorithm dynamically adjusts its frame rate to deliver the best possible image quality. Choosing stabilization over contrast enhancement will result in a higher frame rate, as stabilization requires less processing power.

    And while Camera Regions in ARKit are powerful and allow regions of any size, it’s important to be mindful of resource usage. Larger camera regions will naturally have a greater impact on memory and processing.

    And finally, I strongly recommend you evaluate your overall resource use as you design your experience, particularly when working with large enhanced regions. As a guideline, aim for CameraRegionAnchors to display passthrough content using about one-sixth or less of the overall visible area.

    So those are the topics designed to bridge your physical and digital worlds, and the last of a long list of enterprise-ready enhancements we’ve added this year, from making core functionality like UVC access and object tracking more flexible to introducing Window Follow Mode, App-Protected Content, and Camera Regions. I’m sure you’ll find a myriad of ways to put these new capabilities to work in your app.

    And with that, let me wrap up with some final guidance.

    First, be mindful of environmental safety. Ensure users are in a suitable location to perform tasks safely while wearing Vision Pro, especially when interacting with real-world equipment.

    Remember that with enhanced access, particularly to cameras and sensors, comes increased responsibility. Be transparent with users about what data is being accessed and why. Design your applications to collect only the necessary information for the task at hand, respecting user privacy in the workplace.

    Ensure your application and use case meet the eligibility requirements. These are intended for proprietary in-house apps developed for your own employees, or for custom B2B apps built for another business and distributed privately. And with those items confirmed, if eligible, only request the enterprise entitlements you genuinely need for your application’s specific functionality.

    And finally, please share your feedback with us. We rely on your input not only regarding these specific APIs, but also about the future capabilities you need to build amazing enterprise applications on visionOS.

    Thank you for watching and have a great WWDC!

    • 3:00 - createml on the Mac command line

      xcrun createml objecttracker -s my.usdz -o my.referenceobject
    • 4:28 - VisionEntitlementServices

      import VisionEntitlementServices
      
      func checkLicenseStatus() {
          // Get the shared license details instance
          let license = EnterpriseLicenseDetails.shared
      
          // First, you might check the overall license status
          guard license.licenseStatus == .valid else {
              print("Enterprise license is not valid: \(license.licenseStatus)")
              // Optionally disable enterprise features or alert the user
              return
          }
      
          // Then, check for a specific entitlement before using the feature
          if license.isApproved(for: .mainCameraAccess) {
              // Safe to proceed with using the main camera API
              print("Main Camera Access approved. Enabling feature...")
              // ... enable camera functionality ...
          } else {
              // Feature not approved for this license
              print("Main Camera Access not approved.")
              // ... keep feature disabled, potentially inform user ...
          }
      }
    • 10:04 - SharedCoordinateSpaceModel

      //
      //  SharedCoordinateSpaceModel.swift
      //
      
      import ARKit
      
      class SharedCoordinateSpaceModel {
          let arkitSession = ARKitSession()
          let sharedCoordinateSpace = SharedCoordinateSpaceProvider()
          let worldTracking = WorldTrackingProvider()
      
          func runARKitSession() async {
              do {
                  try await arkitSession.run([sharedCoordinateSpace, worldTracking])
              } catch {
                  reportError("Error: running session: \(error)")
              }
          }
      
          // Push data received from other participants
          func pushCoordinateSpaceData(_ data: Data) {
              if let coordinateSpaceData = SharedCoordinateSpaceProvider.CoordinateSpaceData(data: data) {
                  sharedCoordinateSpace.push(data: coordinateSpaceData)
              }
          }
      
          // Poll data to be sent to other participants
          func pollCoordinateSpaceData() async {
              if let coordinateSpaceData = sharedCoordinateSpace.nextCoordinateSpaceData {
                  // Send my coordinate space data
              }
          }
      
          // Be notified when participants connect or disconnect from the shared coordinate space
          func processEventUpdates() async {
              for await event in sharedCoordinateSpace.eventUpdates {
                  switch event {
                      // Participants changed
                  case .connectedParticipantIdentifiers(participants: let participants):
                      // Handle the change in connected participants
                      print("Connected participants: \(participants)")
                  case .sharingEnabled:
                      print("sharing enabled")
                  case .sharingDisabled:
                      print("sharing disabled")
                  @unknown default:
                      print("handle future events")
                  }
              }
          }
      
          // Be notified when able to add shared world anchors
          func processSharingAvailabilityUpdates() async {
              for await sharingAvailability in worldTracking.worldAnchorSharingAvailability
                  where sharingAvailability == .available {
                  // Able to add anchor
              }
          }
          // Add shared world anchor
          func addWorldAnchor(at transform: simd_float4x4) async throws {
              let anchor = WorldAnchor(originFromAnchorTransform: transform, sharedWithNearbyParticipants: true)
              try await worldTracking.addAnchor(anchor)
          }
      
          // Process shared anchor updates from local session and from other participants
          func processWorldTrackingUpdates() async {
              for await update in worldTracking.anchorUpdates {
                  switch update.event {
                  case .added, .updated, .removed:
                      // Handle anchor updates
                      print("Handle updates to shared world anchors")
                  }
              }
          }
      }
    • 12:50 - contentCaptureProtected

      // Example implementing contentCaptureProtected
      
      struct SecretDocumentView: View {
          var body: some View {
              VStack {
                  Text("Secrets")
                      .font(.largeTitle)
                      .padding()
      
                  SensitiveDataView()
                      .contentCaptureProtected()
              }
              .frame(maxWidth: .infinity, maxHeight: .infinity, alignment: .top)
          }
      }
    • 16:48 - CameraRegionView

      //
      //  InspectorView.swift
      //
      
      import SwiftUI
      import VisionKit
      
      struct InspectorView: View {
          @Environment(CameraFeedDelivery.self) private var cameraFeedDelivery: CameraFeedDelivery
      
          var body: some View {
              CameraRegionView(isContrastAndVibrancyEnhancementEnabled: true) { result in
                  var pixelBuffer: CVReadOnlyPixelBuffer?
                  switch result {
                  case .success(let value):
                      pixelBuffer = value.pixelBuffer
                  case .failure(let error):
                      reportError("Failure: \(error.localizedDescription)")
                      cameraFeedDelivery.stopFeed()
                      return nil
                  }
      
                  cameraFeedDelivery.frameUpdate(pixelBuffer: pixelBuffer!)
                  return nil
              }
          }
      }
      
      @main
      struct EnterpriseAssistApp: App {
          var body: some Scene {
              WindowGroup {
                  ContentView()
              }
      
              WindowGroup(id: "InspectorView") {
                  InspectorView()
              }
              .windowResizability(.contentSize)
          }
      }
    • 21:15 - CameraRegionAnchor

      class CameraRegionHandler {
          let arkitSession = ARKitSession()
          var cameraRegionProvider: CameraRegionProvider?
          var cameraRegionAnchor: CameraRegionAnchor?
      
          func setUpNewAnchor(anchor: simd_float4x4, width: Float, height: Float) async {
              let anchor = CameraRegionAnchor(originFromAnchorTransform: anchor,
                                              width: width,
                                              height: height,
                                              cameraEnhancement: .stabilization)
      
              guard let cameraRegionProvider = self.cameraRegionProvider else {
                  reportError("Missing CameraRegionProvider")
                  return
              }
      
              do {
                  try await cameraRegionProvider.addAnchor(anchor)
              } catch {
                  reportError("Error adding anchor: \(error)")
              }
              cameraRegionAnchor = anchor
      
              Task {
                  let updates = cameraRegionProvider.anchorUpdates(forID: anchor.id)
                  for await update in updates {
                      let pixelBuffer = update.anchor.pixelBuffer
                      // handle pixelBuffer
                  }
              }
          }
      
          func removeAnchor() async {
              guard let cameraRegionProvider = self.cameraRegionProvider else {
                  reportError("Missing CameraRegionProvider")
                  return
              }
      
              if let cameraRegionAnchor = self.cameraRegionAnchor {
                  do {
                      try await cameraRegionProvider.removeAnchor(cameraRegionAnchor)
                  } catch {
                      reportError("Error removing anchor: \(error.localizedDescription)")
                      return
                  }
                  self.cameraRegionAnchor = nil
              }
          }
      }
