Bulk updating gem source

The Bulk API does not accept input records in the URI (records travel in the request body), so URI length should not be affected by your actual Bulk API batch size as set by the gem.

The Record Locking Cheat Sheet is extremely helpful in identifying such cases. It's a PDF; the relevant entries are on the second page. Basically, it sounds like you're combining three things that are high-risk for lock contention (parent-child data skew, large-volume data loads, and a data model with a predilection for parent record locks), and you've got it in spades.

The upshot: rather than trying to implement batching yourself, I'd recommend you just pass the full list of records to the gem's `update` method, start out with the default batch size, and see how it goes.
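To make the trade-off concrete, here's a minimal sketch in plain Ruby (the record hashes and counts are illustrative, and `each_slice` stands in for the segmentation the gem performs) showing how the batch size determines how many Bulk API batches one update produces:

```ruby
# Illustrative records only; any Enumerable of hashes behaves the same way.
records = Array.new(30_000) { |i| { 'Id' => format('001%015d', i), 'Name' => "Account #{i}" } }

default_batches = records.each_slice(10_000).count # the gem's default batch_size
small_batches   = records.each_slice(500).count    # a small custom batch_size

puts default_batches # 3 batches for 30,000 records
puts small_batches   # 60 batches for the same records
```

With the default, the same data load consumes far fewer of your daily Bulk API allocations, which is one more reason to leave the batch size alone unless you have a measured need to change it.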

It's not clear why you're receiving "URI too long" errors when you change your Ruby batch size. Additionally, you're unnecessarily burning through your 10,000 job limit on use of the Bulk API.

If you dig into the gem, what's happening is this: you call

```ruby
def update(sobject, records, get_response = false, send_nulls = false, no_null_list = [], batch_size = 10000, timeout = 1500)
  do_operation('update', sobject, records, nil, get_response, timeout, batch_size, send_nulls, no_null_list)
end
```

which slices your record list into segments `batch_size` long and submits each segment as a batch to the Bulk API.

The mode of operation of the Bulk API is that you open a job, submit (usually quite large) batches of records against that job, and then kick it off. The Bulk API then goes off and does its thing, in parallel or serial mode as configured. This is to prevent any other process or operation from causing conflicting updates to the same record.
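The open-job / add-batches / close-job flow described above can be sketched as follows. This is a simplified illustration, not the gem's actual code: `FakeBulkJob` and `bulk_update` are hypothetical stand-ins for the gem's internal job handling.

```ruby
# Hypothetical stand-in for a Bulk API job; real batches are HTTP requests.
class FakeBulkJob
  attr_reader :batches, :state

  def initialize(sobject, operation)
    @sobject, @operation = sobject, operation
    @batches = []
    @state = 'Open'
  end

  def add_batch(records)
    raise 'job already closed' unless @state == 'Open'
    @batches << records
  end

  def close
    @state = 'Closed' # the Bulk API then processes batches, parallel or serial
  end
end

# Simplified version of the slicing loop the gem performs internally.
def bulk_update(sobject, records, batch_size: 10_000)
  job = FakeBulkJob.new(sobject, 'update')
  records.each_slice(batch_size) { |segment| job.add_batch(segment) }
  job.close
  job
end

job = bulk_update('Account', Array.new(25_000) { |i| { 'Id' => i } })
puts job.batches.size # 3 batches (10k + 10k + 5k) on a single job
```

Note that all 25,000 records ride on a single job here; it's the number of batches, not the number of jobs, that grows as you shrink the batch size.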