The following query
```sql
SELECT spree_orders.*
FROM spree_orders
WHERE spree_orders.user_id = ?
  AND spree_orders.completed_at IS ?
  AND spree_orders.created_by_id = ?
ORDER BY created_at DESC LIMIT ?
```
performs quite badly, even though LIMIT is always 1, because:
* ORDER BY requires sorting by `created_at`, a column which is not indexed,
so a sequential scan is performed.
* Although `completed_at` is indexed, `user_id` and `created_by_id` are
not, also causing a sequential scan.
To make it worse, this query is executed very often, among other places in
the following controllers, which are also related to checkout:
* CartController#populate
* EnterprisesController#shop
* LineItemsController#bought
* ShopController#products
* ShopController#order_cycle
In some cases this query alone accounts for 66.8% of the total time
of the endpoint.
Results
See for yourself: we go from 56.643ms to 0.077ms. Pretty neat.
```
openfoodnetwork=> explain analyze SELECT "spree_orders".* FROM "spree_orders" WHERE "spree_orders"."user_id" = 1 AND "spree_orders"."completed_at" IS NULL AND "spree_orders"."created_by_id" = 1 ORDER BY created_at DESC LIMIT 1;
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------
Limit (cost=11753.03..11753.04 rows=1 width=195) (actual time=56.580..56.580 rows=0 loops=1)
-> Sort (cost=11753.03..11753.04 rows=1 width=195) (actual time=56.578..56.578 rows=0 loops=1)
Sort Key: created_at DESC
Sort Method: quicksort Memory: 25kB
-> Seq Scan on spree_orders (cost=0.00..11753.02 rows=1 width=195) (actual time=56.571..56.571 rows=0 loops=1)
Filter: ((completed_at IS NULL) AND (user_id = 1) AND (created_by_id = 1))
Rows Removed by Filter: 256135
Planning time: 0.252 ms
Execution time: 56.643 ms
(9 rows)
openfoodnetwork=> CREATE INDEX ON spree_orders (completed_at, user_id, created_by_id, created_at);
CREATE INDEX
openfoodnetwork=> explain analyze SELECT "spree_orders".* FROM "spree_orders" WHERE "spree_orders"."user_id" = 1 AND "spree_orders"."completed_at" IS NULL AND "spree_orders"."created_by_id" = 1 ORDER BY created_at DESC LIMIT 1;
Limit (cost=8.45..8.46 rows=1 width=195) (actual time=0.030..0.030 rows=0 loops=1)
-> Sort (cost=8.45..8.46 rows=1 width=195) (actual time=0.029..0.029 rows=0 loops=1)
Sort Key: created_at DESC
Sort Method: quicksort Memory: 25kB
-> Index Scan using spree_orders_completed_at_user_id_created_by_id_created_at_idx on spree_orders (cost=0.42..8.44 rows=1 width=195) (actual time=0.021..0.021 rows=0 loops=1)
Index Cond: ((completed_at IS NULL) AND (user_id = 1) AND (created_by_id = 1))
Planning time: 0.199 ms
Execution time: 0.077 ms
```
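For reference, the same index could be added from a Rails migration along these lines (a minimal sketch; the migration class name is assumed, and the index name matches the one PostgreSQL generated above):
```ruby
class AddOrderLookupIndexToSpreeOrders < ActiveRecord::Migration
  def change
    # Composite index covering the WHERE columns and the ORDER BY column
    # of the query above.
    add_index :spree_orders,
              [:completed_at, :user_id, :created_by_id, :created_at],
              name: "spree_orders_completed_at_user_id_created_by_id_created_at_idx"
  end
end
```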
This is required because when the default stock location is created, the `backorderable_default` column doesn't exist yet; when the column is added later, its initial default is true. That is why we need to force it to false here. This column provides the default value for `on_demand`, which must be false.
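As an illustration, the fix could look like this (a minimal sketch; the model and column names follow Spree conventions but are assumptions here):
```ruby
# The default stock location was seeded before the column existed, so we
# correct it explicitly after adding the column.
location = Spree::StockLocation.find_by(default: true)
# The column is added with a database default of true, but it feeds the
# default of `on_demand`, which must be false.
location.update_column(:backorderable_default, false) if location
```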
* Add new attribute instagram_placeholder to local yml
* Add placeholder to instagram link input
* Add migration to fix wrong instagram links
* Add new attribute facebook_placeholder to local yml
* Add placeholder to facebook link input
* Add new attribute linkedin_placeholder to local yml
* Add placeholder to linkedin link input
* Change `each` to `find_each` in the migration (see the sketch after this list)
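A minimal sketch of such a data migration, assuming the wrong links live in an `instagram` column on `Enterprise` and store full URLs instead of handles:
```ruby
class FixWrongInstagramLinks < ActiveRecord::Migration
  def up
    # find_each processes records in batches (1000 by default) instead of
    # loading the whole table into memory at once, as plain `each` would.
    Enterprise.where.not(instagram: [nil, ""]).find_each do |enterprise|
      handle = enterprise.instagram
                         .sub(%r{\Ahttps?://(www\.)?instagram\.com/}, "")
                         .chomp("/")
      enterprise.update_column(:instagram, handle)
    end
  end
end
```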
Using Marshal.dump on the French production database raised an error:
Encoding::UndefinedConversionError: "\xC3" from ASCII-8BIT to UTF-8
Replacing Marshal with YAML solves the problem; it is also more reliable
and human-readable.
This code was run against the French, Australian and UK production
data successfully.
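For illustration, the difference boils down to the following (a minimal sketch with made-up data):
```ruby
require "yaml"

data = { "name" => "Crèmerie du Marché" } # non-ASCII content, as in the French data

# Marshal.dump returns a binary (ASCII-8BIT) string, so forcing it into UTF-8
# can raise Encoding::UndefinedConversionError for bytes like "\xC3".
# YAML serializes to plain, human-readable UTF-8 text instead:
serialized = YAML.dump(data)
restored   = YAML.load(serialized)
restored == data # => true
```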
When we introduced the Customer model, we didn't associate any existing
customers with users that have the same email address.
Later we decided to create that association when users sign up, but we didn't
update the existing customers. We do that now, for data consistency and to
solve several bugs.
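A minimal sketch of the backfill, assuming customers link to users through a `user_id` column and matching is done on email (model and column names are assumptions):
```ruby
# Associate pre-existing customers with the user that shares their email.
Customer.where(user_id: nil).find_each do |customer|
  user = Spree::User.find_by(email: customer.email)
  customer.update_column(:user_id, user.id) if user
end
```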