There were a few changes needed:
* Plugins are now specified through the `plugin:` config keyword.
* All plugin gems need to be specified explicitly in the Gemfile since they are no longer pulled in as dependencies of the plugins we already list explicitly.
* All plugin gems need to be updated in order to use the new APIs.
* One cop was renamed.
* New offenses that are safe to correct were fixed directly with `bundle exec rubocop -a`.
* New offenses that are unsafe to correct were added to the TODO configuration with `bundle exec rubocop --auto-gen-config --auto-gen-only-exclude --exclude-limit 1400 --no-auto-gen-timestamp`.
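For example, the Gemfile now needs entries along these lines (the exact gem names depend on which plugins we use; these are only illustrative):

```ruby
# Gemfile: plugin gems listed explicitly (example gems only)
group :development, :test do
  gem "rubocop", require: false
  gem "rubocop-rails", require: false
  gem "rubocop-rspec", require: false
end
```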
I was hoping to reduce the query count, but it stayed the same. In fact, I'd expect the query count to be higher with this version: the DfcCatalogImporter queries all variants and links at once, while the OC job is not preloading the variants at the moment. We can optimise the job though, for example along the lines of the sketch below.
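A hypothetical sketch of that optimisation (model and method names are illustrative, not the actual code):

```ruby
# Load all links together with their variants up front, instead of fetching
# each variant separately inside the loop.
links = SemanticLink.includes(:variant)
links.find_each do |link|
  sync_stock_for(link.variant) # hypothetical per-variant sync
end
```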
If we kept the old version and there were multiple catalogs per variant, then we would call the importer with the reset multiple times. The job iterates through each link only once, though. So depending on the ratio of catalogs to variants, I'm not sure which version would be more efficient.
Neither version properly deals with the edge case of multiple catalogs per variant anyway. Imagine there are two catalogs with different stock levels: which one do we choose, or do we add them up? And if the variant disappears from one catalog, we still want to sell it through the other, but depending on the order of processing we may reset the variant if it's missing from the last catalog. Let's worry about that when it actually happens; maybe it will be better to restrict variants to one catalog.
This name better reflects what it's doing.
As this job is scheduled automatically by Sidekiq, I think there shouldn't be any jobs with the old name in Redis. So I didn't bother keeping a placeholder for the old name.
And clean up an unused include.
After 10 minutes, I'd consider that it failed to open the order cycle. Who would want their products to sync, or get a notification at a random time during the order cycle?
Best viewed with whitespace ignored.
Being based on the DB value should be more robust.
This prevents an order cycle from being "opened" when it's already open. But note that the order cycle can become "unopened" (see OrderCycle#reset_opened_at).
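A minimal sketch of that guard, assuming we check the persisted opened_at value (method names are illustrative):

```ruby
def open_order_cycle(order_cycle)
  # Already recorded as opened in the DB, so don't open it again.
  # Note that OrderCycle#reset_opened_at can still clear this later.
  return if order_cycle.opened_at.present?

  order_cycle.update!(opened_at: Time.zone.now)
  # ...enqueue syncing, notifications, etc.
end
```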
Nice to see the database query count drop, but I must confess I don't know why!
There might be a delay before it gets sent, so it's better to record the time the event occurred at.
It would have been simpler to just add it to the data hash, but I felt it was an important detail for an event and should be at the top level along with the event name.
In the case of order cycle opening, this is the same as opened_at. I've included this in the payload for clarity too.
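Roughly, the payload then looks like this (field names are illustrative, not the actual schema):

```ruby
# occurred_at sits at the top level next to the event name rather than being
# buried in the data hash.
payload = {
  name: "order_cycle.opened",
  occurred_at: order_cycle.opened_at, # when it happened, not when it was sent
  data: {
    order_cycle_id: order_cycle.id,
    opened_at: order_cycle.opened_at, # same value, included for clarity
  },
}
```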
Admins may want to pre-process some orders manually for going public.
And it's good to reserve stock for these.
At some point in the future, the supplier may have an order cycle with its own times, but the current FDC implementation allows orders at any time.
Update every single order line to reflect local orders and stock levels.
New cases supported (see the sketch after this list):
* Add lines for orders created by an admin.
* Create a backorder line after an increase in a local line item's quantity.
* Adjust local stock after a variant has been removed from the order cycle.
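A rough sketch of the per-line logic covering these cases; all names are assumptions, not the actual implementation:

```ruby
order_cycle_variants = order_cycle.variants

# Variants that were removed from the order cycle: release the backordered
# amount back into local stock.
backorder.lines.each do |line|
  variant = linked_variant(line) # hypothetical lookup via semantic link
  next if order_cycle_variants.include?(variant)

  adjust_local_stock(variant, line)
end

# Remaining variants: add or update a backorder line to reflect local orders,
# including admin-created orders and increased line item quantities.
order_cycle_variants.each do |variant|
  line = backorder.find_or_build_line(variant) # hypothetical helper
  line.quantity = needed_quantity(variant)
end
```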
I still think that some of these objects won't be visible in Bugsnag, but I don't want to spend more time testing cases that were broken before and may not be relevant now.
Bugsnag expects a string as the value, not an ActiveRecord object. The result was just "filtered" data, not showing any of the order's attributes.
Since wanting to share the order data seems a common pattern, I added a
convenience method for it.
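A minimal sketch of what that could look like; the helper name and payload shape are assumptions, not the actual code:

```ruby
# Serialise the order before handing it to Bugsnag so the metadata isn't
# reported as filtered.
def order_metadata(order)
  { order: order.attributes.transform_values(&:to_s) }
end

Bugsnag.notify(error) do |event|
  event.add_metadata(:order, order_metadata(order))
end
```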
The new job class blends code from the BackorderJob and the
CompleteBackorderJob for the specific case of adjusting quantities after
an order has been cancelled.
I would like to write a more general class which can be used for any order amendments, but this was the quickest solution to cater for the currently running pilots.
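Very roughly, the idea looks like this (class and method names are illustrative):

```ruby
# Find the open backorder (as BackorderJob does) and adjust its quantities
# (as CompleteBackorderJob does) after an order has been cancelled.
class AmendBackorderJob < ApplicationJob
  def perform(order)
    backorder = find_open_backorder(order)
    adjust_quantities_after_cancellation(backorder, order)
    send_backorder(backorder)
  end
end
```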
This will delay the checkout request by a few seconds if there's stock to sync, but it minimises the chance of missing reduced stock from orders on another platform.
We still have a gap between the checkout and placing a backorder. In
that time we can't guarantee enough stock. But let's tackle that after
the pilot.