Recently we came across a customer that needed to process a journal batch of more than 10 million lines from Fusion Accounting Hub (FAH/XLA) into General Ledger.
The production environment consisted of an application tier with 4 CPU cores and 16 GB RAM and a database tier with 8 CPU cores and 16 GB RAM, running on Intel Xeon 3.5 GHz servers. At a glance this looks like a small setup, and it is, but the actual problem wasn't the amount of server resources; it was how those resources were used. A Journal Import concurrent request runs on one CPU core, and one core only: the bigger the batch, the longer the processing, and very large batches can also cause memory shortages.
We provided a logical solution: split the big journal batches into smaller ones and run the resulting Journal Import requests in parallel.
The main idea was to produce smaller, and of course still balanced, journal batches so that all CPU cores could be put to work.
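The splitting logic can be sketched roughly as follows. This is a minimal illustration, not the actual implementation: it assumes each journal is internally balanced (so any grouping of whole journals stays balanced) and simply distributes journals across N buckets so the buckets end up roughly equal in size, which lets the parallel Journal Import requests finish at about the same time. The function and variable names are hypothetical.

```python
import heapq

def split_batches(journals, n_workers):
    """Distribute whole journals across n_workers buckets, greedily
    size-balanced. journals: list of (journal_id, line_count) tuples.
    Splitting at journal boundaries keeps every bucket balanced,
    since each journal's debits and credits already net to zero."""
    # Min-heap of (total_lines_assigned_so_far, bucket_index)
    heap = [(0, i) for i in range(n_workers)]
    heapq.heapify(heap)
    buckets = [[] for _ in range(n_workers)]
    # Largest journals first: classic greedy (LPT) load balancing
    for journal_id, lines in sorted(journals, key=lambda j: -j[1]):
        total, idx = heapq.heappop(heap)
        buckets[idx].append(journal_id)
        heapq.heappush(heap, (total + lines, idx))
    return buckets
```

In practice each bucket would be stamped with its own interface group identifier so that a separate Journal Import request picks it up, allowing one request per CPU core to run concurrently.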
The solution brought total processing time down from 10 hours to 30 minutes.