Oftentimes, "Stream vs. Batch" is discussed as if it were one or the other, but to me this framing doesn't make much sense.
Many streaming systems will apply batching too, i.e. processing or transferring multiple records (a "batch") at once, thus offsetting connection overhead, amortizing the cost of fanning out work to multiple threads, opening the door for highly efficient SIMD processing, etc., all to ensure high performance. The prevailing trend towards storage/compute separation in data streaming and processing architectures (for instance, thinking of platforms such as WarpStream, and Diskless Kafka at large) further accelerates this development.
Typically, this is happening transparently to users, done in an opportunistic way: handling all of those records (up to some limit) which have arrived in a buffer since the last batch. This makes for a very nice self-regulating system. High arrival rate of records: larger batches, improving throughput. Low arrival rate: smaller batches, perhaps with even just a single record, ensuring low latency. Columnar in-memory data formats like Apache Arrow are of great help for implementing such a design.
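To make the self-regulating behavior concrete, here is a minimal Python sketch of such an opportunistic batcher (`drain_batch` is a hypothetical helper, not the API of any particular system): it blocks only for the first record, then grabs whatever else has already arrived, up to a limit.

```python
import queue

def drain_batch(q: "queue.Queue", max_batch: int = 100) -> list:
    """Return everything that has arrived since the last call, up to max_batch.

    Blocks only for the first record, so a quiet source yields a batch of one
    (low latency) while a busy source yields large batches (high throughput).
    """
    batch = [q.get()]                      # wait for at least one record
    while len(batch) < max_batch:
        try:
            batch.append(q.get_nowait())   # take whatever else is buffered
        except queue.Empty:
            break                          # buffer drained: ship what we have
    return batch

# Simulate a burst of 250 records, then a single straggler.
q = queue.Queue()
for i in range(250):
    q.put(i)
print([len(drain_batch(q)) for _ in range(3)])  # [100, 100, 50]
q.put("straggler")
print(len(drain_batch(q)))                      # 1
```

Note how no tuning is needed: the batch size falls out of the arrival rate automatically, which is exactly the self-regulating property described above.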
In contrast, what the "Stream vs. Batch" discussion should, in my opinion, actually be about is "Pull vs. Push" semantics: will the system query its sources for new records at a fixed interval, or will new records be pushed to the system as soon as possible? No matter how often you pull, you can't convert a pull-based solution into a streaming one. Unless a source represents a consumable stream of changes itself (you see where this is going), a pull-based system may miss updates happening between fetch attempts, as well as deletes.
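The difference is easy to demonstrate with a toy model (all names here are illustrative): a pull-based poller only ever sees snapshots of a table, while a push-based change log captures every event, including intermediate updates and deleted rows.

```python
# A toy "table" plus its change log: the poller sees snapshots,
# the change log (push) sees every individual event.
table = {}
changelog = []

def apply(op: str, key: str, value=None) -> None:
    changelog.append((op, key, value))  # push: every change is captured
    if op == "delete":
        table.pop(key, None)
    else:                               # insert or update
        table[key] = value

def poll_snapshot() -> dict:
    return dict(table)                  # pull: only the state at fetch time

# Between two polls, a row is updated twice and another is inserted and deleted.
apply("insert", "a", 1)
apply("update", "a", 2)
apply("insert", "b", 9)
apply("delete", "b")

print(poll_snapshot())  # {'a': 2} -- the poller never sees a=1, or b at all
print(len(changelog))   # 4 -- the change log has every event
```

This is precisely why change data capture turns a database into a consumable stream: the log, not the snapshot, is the complete record.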
This is what makes streaming so interesting and powerful: it provides you with a complete view of your data in real-time. A streaming system lets you put your data in the location where you need it, in the format you need it, and in the shape you need it (think denormalization), immediately as it gets produced or updated. The price for this is a potentially higher complexity, for example when reasoning about streaming joins (and their state), or handling out-of-order data. But the streaming community is working continuously to improve things here, e.g. via disaggregated state backends, transactional stream processing, and much more. I'm really excited about all the innovation happening in this space right now.
Now, you might wonder: "Do I really need streaming (push), though? I'm fine with batch (pull)."
That's a common and fair question. In my experience, it is best answered by giving it a try yourself. Again and again I have seen how folks who were skeptical at first very quickly wanted real-time streaming for more and more of their use cases, if not all of them, once they had seen it in action. If you've experienced a data freshness of a second or two in your data warehouse, you never want to miss this magic again.
All that being said, it's actually not even about pull or push so much; the approaches complement each other. For instance, backfills are often done via batching, i.e. querying, in an otherwise streaming-based system. Also, if you want the completeness of streaming but don't require super low latency, you may decide to suspend your streaming pipelines (thus saving cost) in times of low data volume, resume when there's new data to process, and halt again.
Batch streaming, if you will.
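As a rough sketch of that suspend-on-idle pattern (the function and its parameters are hypothetical, not from any specific engine): run the streaming loop as usual, but halt once the source has been quiet for a number of consecutive polls, leaving it to an external trigger to resume the pipeline when new data arrives.

```python
from typing import Callable, Optional

def run_until_idle(source: Callable[[], Optional[object]],
                   process: Callable[[object], None],
                   idle_limit: int = 2) -> None:
    """Process records until `source` returns no data `idle_limit` times in a row.

    `source` returns the next record, or None as a stand-in for "nothing new".
    """
    idle = 0
    while idle < idle_limit:
        record = source()
        if record is None:
            idle += 1          # nothing new: count towards suspension
        else:
            idle = 0           # fresh data resets the idle counter
            process(record)
    # Pipeline suspends here (saving cost); an external trigger resumes it.

events = iter([1, 2, None, 3, None, None])
processed = []
run_until_idle(lambda: next(events, None), processed.append)
print(processed)  # [1, 2, 3]
```

Wrapped in a scheduler that restarts the loop whenever new data lands, this gives you streaming completeness at batch-like cost.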