
Meta: DINOv3 is self-supervised learning for vision at unprecedented scale




The original post from Meta's website follows.

My one-line reaction after reading it (Tan): the performance really is excellent, but the Apache license has been changed to a commercial license. In other words, the Apache license that allowed free use, modification, and even commercial use has been replaced by a commercial license that involves payment or tighter restrictions; to keep using the models, you have to follow the new terms.



Open Source

DINOv3: Self-supervised learning for vision at unprecedented scale

August 14, 2025

Takeaways:

  • We’re introducing DINOv3, which scales self-supervised learning for images to create universal vision backbones that achieve absolute state-of-the-art performance across diverse domains, including web and satellite imagery.
  • DINOv3 backbones produce powerful, high-resolution image features that make it easy to train lightweight adapters. This leads to exceptional performance on a broad array of downstream vision tasks, including image classification, semantic segmentation, and object tracking in video.
  • We’ve incorporated valuable community feedback, enhancing the versatility of DINOv3 by shipping smaller models that outperform comparable CLIP-based derivatives across a broad evaluation suite, as well as alternative ConvNeXt architectures for resource-constrained use cases.
  • We’re releasing the DINOv3 training code and pre-trained backbones under a commercial license to help drive innovation and advancements in the computer vision and multimodal ecosystem.

Self-supervised learning (SSL) —the concept that AI models can learn independently without human supervision—has emerged as the dominant paradigm in modern machine learning. It has driven the rise of large language models that acquire universal representations by pre-training on massive text corpora. However, progress in computer vision has lagged behind, as the most powerful image encoding models still rely heavily on human-generated metadata, such as web captions, for training.

Today, we’re releasing DINOv3, a generalist, state-of-the-art computer vision model trained with SSL that produces superior high-resolution visual features. For the first time, a single frozen vision backbone outperforms specialized solutions on multiple long-standing dense prediction tasks including object detection and semantic segmentation.



DINOv3’s breakthrough performance is driven by innovative SSL techniques that eliminate the need for labeled data—drastically reducing the time and resources required for training and enabling us to scale training data to 1.7B images and model size to 7B parameters. This label-free approach enables applications where annotations are scarce, costly, or impossible.

For example, our research shows that DINOv3 backbones pre-trained on satellite imagery achieve exceptional performance on downstream tasks such as canopy height estimation.

We believe DINOv3 will help accelerate existing use cases and also unlock new ones, leading to advancements in industries such as healthcare, environmental monitoring, autonomous vehicles, retail, and manufacturing—enabling more accurate and efficient visual understanding at scale.

We’re releasing DINOv3 with a comprehensive suite of open sourced backbones under a commercial license, including a satellite backbone trained on MAXAR imagery. We’re also sharing a subset of our downstream evaluation heads, enabling the community to reproduce our results and build upon them. Additionally, we’re providing sample notebooks so the community has detailed documentation to help them start building with DINOv3 today.

Unlocking high-impact applications with self-supervised learning

DINOv3 achieves a new milestone by demonstrating, for the first time, that SSL models can outperform their weakly supervised counterparts across a wide range of tasks.

While previous DINO models already held a significant lead in dense prediction tasks, such as segmentation and monocular depth estimation, DINOv3 surpasses these accomplishments.

Our models match or exceed the performance of the strongest recent models such as SigLIP 2 and Perception Encoder on many image classification benchmarks, and at the same time, they drastically widen the performance gap for dense prediction tasks.





DINOv3 builds on the breakthrough DINO algorithm, requiring no metadata input, consuming only a fraction of the training compute compared to prior methods, and still delivering exceptionally strong vision foundation models.

The novel refinements introduced in DINOv3 lead to state-of-the-art performance on competitive downstream tasks such as object detection under the severe constraint of frozen weights. This eliminates the need for researchers and developers to fine-tune the model for specific tasks, enabling broader and more efficient application.



Finally, because the DINO approach is not specifically tailored to any image modality, the same algorithm can be applied beyond web imagery to other domains where labeling is prohibitively difficult or expensive. DINOv2 already leverages vast amounts of unlabeled data to support diagnostic and research efforts in histology, endoscopy, and medical imaging. In satellite and aerial imagery, the overwhelming volume and complexity of data make manual labeling impractical.

With DINOv3, we make it possible for these rich datasets to be used to train a single backbone that can then be used across satellite types, enabling general applications in environmental monitoring, urban planning, and disaster response.

DINOv3 is already having real-world impact.

The World Resources Institute (WRI) is using our latest model to monitor deforestation and support restoration, helping local groups protect vulnerable ecosystems. WRI uses DINOv3 to analyze satellite images and detect tree loss and land-use changes in affected ecosystems. The accuracy gains from DINOv3 support automating climate finance payments by verifying restoration outcomes, reducing transaction costs, and accelerating funding to small, local groups.

For example, compared to DINOv2, DINOv3 trained on satellite and aerial imagery reduces the average error in measuring tree canopy height in a region of Kenya from 4.1 meters to 1.2 meters. WRI is now able to scale support for thousands of farmers and conservation projects more efficiently.



Scalable and efficient visual modeling without fine-tuning

We built DINOv3 by training a 7x larger model on a 12x larger dataset than its predecessor, DINOv2. To showcase the model’s versatility, we evaluate it across 15 diverse visual tasks and more than 60 benchmarks. The DINOv3 backbone particularly shines on all dense prediction tasks, showing an exceptional understanding of the scene layout and underlying physics.

The rich, dense features capture measurable attributes or characteristics of each pixel in an image and are represented as vectors of floating-point numbers. These features are capable of parsing objects into finer parts, even generalizing across instances and categories. This dense representation power makes it easy to train lightweight adapters with minimal annotations on top of DINOv3, meaning a few annotations and a linear model are sufficient to obtain robust dense predictions.

Pushing things further and using a more sophisticated decoder, we show that it’s possible to achieve state-of-the-art performance on long-standing core computer vision tasks without fine-tuning the backbone.

We show such results on object detection, semantic segmentation, and relative depth estimation.
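As a concrete illustration of the adapter recipe described above, the sketch below trains a single linear layer on top of frozen features. A fixed random projection stands in for the DINOv3 backbone, and the data, dimensions, and training loop are assumptions chosen for the sketch, not Meta's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen backbone: a fixed random projection followed by
# a nonlinearity. In practice these would be DINOv3 features; the input
# and feature dimensions here are purely illustrative.
D_IN, D_FEAT = 256, 64
W_backbone = rng.normal(size=(D_IN, D_FEAT)) / np.sqrt(D_IN)

def frozen_features(x):
    """Feature extraction; the backbone weights are never updated."""
    return np.tanh(x @ W_backbone)

# Toy two-class data: the classes differ only in mean intensity.
n = 200
x = np.vstack([rng.normal(loc=-1.0, size=(n, D_IN)),
               rng.normal(loc=+1.0, size=(n, D_IN))])
y = np.concatenate([np.zeros(n), np.ones(n)])

feats = frozen_features(x)

# Lightweight adapter: one linear layer trained with logistic loss.
w, b, lr = np.zeros(D_FEAT), 0.0, 0.5
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))  # predicted probabilities
    grad = p - y                                 # logistic-loss gradient
    w -= lr * feats.T @ grad / len(y)
    b -= lr * grad.mean()

acc = ((feats @ w + b > 0) == (y == 1)).mean()
print(f"linear-probe accuracy: {acc:.2f}")
```

Because the backbone is never updated, only the small vector `w` and scalar `b` are learned, which is why a handful of annotations and a linear model can suffice when the frozen features are strong.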

Because state-of-the-art results can be achieved without fine-tuning the backbone, a single forward pass can serve multiple applications simultaneously.

This enables the inference cost of the backbone to be shared across tasks, which is especially critical for edge applications that often require running many predictions at once.

DINOv3’s versatility and efficiency make it the perfect candidate for such deployment scenarios, as demonstrated by NASA’s Jet Propulsion Laboratory (JPL), which is already using DINOv2 to build exploration robots for Mars, enabling multiple vision tasks with minimal compute.
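The amortization argument can be sketched as follows: the expensive forward pass is paid once, and every task head reuses its output. A stub function stands in for the frozen backbone, and the two hypothetical linear heads (segmentation and depth), along with all names and dimensions, are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def backbone(image):
    """Stub for a frozen dense feature extractor: one forward pass
    produces a patch-feature grid (H x W x D). Dims are illustrative."""
    h, w, d = 14, 14, 32
    return np.tanh(image.mean() + rng.normal(size=(h, w, d)))

# Task heads share the same features; each is a cheap linear map.
W_seg = rng.normal(size=(32, 5))    # 5 segmentation classes per patch
W_depth = rng.normal(size=(32, 1))  # scalar depth per patch

image = rng.normal(size=(224, 224, 3))

feats = backbone(image)             # expensive pass, paid once ...
seg_logits = feats @ W_seg          # ... then reused by every head
depth_map = (feats @ W_depth)[..., 0]

print(seg_logits.shape, depth_map.shape)
```

Adding a third prediction task costs only another small matrix multiply over the cached features, which is the property that matters for compute-constrained edge deployments.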

A family of deployment-friendly models

Scaling DINOv3 to 7B parameters shows SSL’s full potential. However, a 7B model is impractical for many downstream applications. Following feedback from the community, we built a family of models spanning a large range of inference compute requirements to empower researchers and developers across diverse use cases.

By distilling the ViT-7B model into smaller, high-performing variants like ViT-B and ViT-L, DINOv3 outperforms comparable CLIP-based models across a broad evaluation suite.

Additionally, we introduce alternative ConvNeXt architectures (T, S, B, L) distilled from ViT-7B, that can accommodate varying compute constraints. We’re also releasing our distillation pipeline to enable the community to build upon this foundation.
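A minimal sketch of feature distillation, the general technique named above: a small "student" is trained to match the outputs of a large, frozen "teacher". Both models here are plain linear maps with made-up dimensions, and the mean-squared matching loss is an assumption for the sketch; this is not Meta's released distillation pipeline:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stub "teacher" (large, frozen) and "student" (small, trained): both map
# inputs to a shared feature dimension. Dimensions are purely illustrative.
D_IN, D_FEAT = 128, 32
W_teacher = rng.normal(size=(D_IN, D_FEAT)) / np.sqrt(D_IN)  # frozen
W_student = np.zeros((D_IN, D_FEAT))                         # learned

def mse(a, b):
    return ((a - b) ** 2).mean()

lr = 1.0
for _ in range(200):
    x = rng.normal(size=(64, D_IN))      # a fresh unlabeled batch
    t = x @ W_teacher                    # teacher features (no gradient)
    s = x @ W_student                    # student features
    # Gradient of the mean squared feature-matching loss w.r.t. W_student.
    grad = 2 * x.T @ (s - t) / (64 * D_FEAT)
    W_student -= lr * grad

x_val = rng.normal(size=(256, D_IN))
err = mse(x_val @ W_student, x_val @ W_teacher)
print(f"feature-matching error: {err:.4f}")
```

Note that no labels appear anywhere: the teacher's features are the training signal, which is why distillation fits naturally into a self-supervised pipeline.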





Note: personal original commentary, for reference only.


Notice: The content above (including the pictures and videos if any) is uploaded and posted by a user of NetEase Hao, which is a social media platform and only provides information storage services.

Published by the NetEase account 親愛的數據, author of the book 《我看見了風暴:人工智能基建革命》.