Here's a sample zzdump custom function for DTL:

Class Utils.Functions Extends Ens.Util.FunctionSet
{

/// w ##class(Utils.Functions).ZZDUMP("abc")
ClassMethod ZZDUMP(var) As %String
{
	set id = $random(100000000000000000)
	while $d(^ISC.AsyncQueue(id)) {
		set id = $random(100000000000000000)
	}
	set str = ""
	
	try {	
		$$$TOE(sc, ##class(%Api.Atelier.v1).MonitorBeginCapture(id))
		if '$data(var) {
			write "<UNDEFINED>"
		} else {
			if '$isobject(var) {
				zzdump var
			} else {
				if var.%IsA(##class(%DynamicAbstractObject).%ClassName(1)) {
					zzdump var.%ToJSON()
				} elseif var.%IsA(##class(%Stream.Object).%ClassName(1)) {
					do var.Rewind()
					zzdump var.Read()
				} elseif var.%IsA(##class(EnsLib.HL7.Message).%ClassName(1)) {
					zzdump var.OutputToString()
				} else {
				// zzdump would output only the OREF string, so use zw instead.
					zw var 
				}
			}
		}
		$$$TOE(sc, ##class(%Api.Atelier.v1).MonitorEndCapture(id))
	
		for i=1:1:^ISC.AsyncQueue(id,"cout","i") {
			set str = str _ ^ISC.AsyncQueue(id,"cout",i) _ $$$NL
		}
	} catch ex {
		do ##class(%Api.Atelier.v1).MonitorEndCapture(id)
	}
	kill ^ISC.AsyncQueue(id)
	quit str
}

}

Any valid alphanumeric section name will work. For example:

Class Utils.BO Extends Ens.BusinessOperation
{
Property MySetting;
Parameter SETTINGS = "MySetting:My Custom Category";
}

This will create a new My Custom Category section for this business host.

Setting names can be localized by following this guide (categories too, probably, though I haven't tried). The domain would be Ensemble.

I think you can guarantee that the picked set of lists would:

  1. Provide the fullest possible coverage of numbers
  2. Skip at least the fully superfluous lists

And do it in O(2n), i.e. two passes, where n is the list count (assuming the lists are of similar length).

 

Before anything, zero-init a counter for each number (call this the Numbers array). You'll need to do two passes over your lists.

On the first pass, check each list's values against Numbers. If at least one corresponding counter is zero (meaning the current list has a number we have not encountered before), add the list to the result and increment the counter of every number present in the current list by 1.

In our case:

 

Numbers: 0, 0, 0, 0, 0, 0, 0, 0, 0

List(1)="3,5,6,7,9"

As Numbers(3)==0, we add List(1) to the output and modify Numbers:

Numbers: 0, 0, 1, 0, 1, 1, 1, 0, 1

 

In a similar vein, we iterate over the rest of our lists. In this walkthrough we add all of them (strictly speaking, List(5)="4,7,9" adds no new number by the time we reach it, so a faithful implementation would skip it on the first pass already; the second pass removes it either way, and the final result is the same). Our Numbers after the first pass look like this:

Numbers: 1, 2, 1, 2, 2, 3, 2, 2, 4

Lists: 1, 2, 3, 4, 5

 

Now do a second pass, only over the lists added in the first pass. If every element of a list has a counter value >1 in Numbers, remove the list and decrement the corresponding counters by 1.

 

List(1)="3,5,6,7,9"

Numbers: 1, 2, 1, 2, 2, 3, 2, 2, 4

Numbers(3)==1, so this list remains.

 

List(2)="1,2,6,9"

Numbers(1)==1, so this list remains.

 

List(3)="5,8,9"

Numbers(5)==2>1, Numbers(8)==2>1, Numbers(9)==4>1, so we remove this list; new Numbers:

Numbers: 1, 2, 1, 2, 1, 3, 2, 1, 3

 

List(4)="2,4,6,8"

Numbers(8)==1, so this list remains.

 

List(5)="4,7,9"

Numbers(4)==2>1, Numbers(7)==2>1, Numbers(9)==3>1, so we remove this list; new Numbers:

Numbers: 1, 2, 1, 1, 1, 3, 1, 1, 2

Lists: 1, 2, 4

 

This does not, however, guarantee a minimal number of lists, but entirely superfluous lists are removed, and every possible number remains covered (has at least one reference in Numbers).
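The two-pass procedure above can be sketched in a few lines of Python (a language-neutral illustration, not tied to any particular implementation; here the first pass applies the new-number check strictly, so List(5) is skipped up front, and the final answer is the same: lists 1, 2 and 4):

```python
def two_pass_cover(lists):
    """lists: dict of list id -> set of numbers; returns the ids kept."""
    counts = {}  # the Numbers array: number -> how many kept lists contain it
    kept = []

    # Pass 1: keep any list that contributes at least one unseen number.
    for lid, nums in lists.items():
        if any(counts.get(n, 0) == 0 for n in nums):
            kept.append(lid)
            for n in nums:
                counts[n] = counts.get(n, 0) + 1

    # Pass 2: drop kept lists whose every number is covered more than once.
    for lid in list(kept):
        if all(counts[n] > 1 for n in lists[lid]):
            kept.remove(lid)
            for n in lists[lid]:
                counts[n] -= 1
    return kept

example = {
    1: {3, 5, 6, 7, 9},
    2: {1, 2, 6, 9},
    3: {5, 8, 9},
    4: {2, 4, 6, 8},
    5: {4, 7, 9},
}
print(two_pass_cover(example))  # → [1, 2, 4]
```

Each list is visited exactly twice, matching the two-pass cost estimate above.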

 

Another way I thought it could be solved is by transposing the lists into numbers like this:


Number(1)=$lb(2)
Number(2)=$lb(2, 4)
Number(3)=$lb(1)
Number(4)=$lb(4, 5)
Number(5)=$lb(1, 3)
Number(6)=$lb(1, 2, 4)
Number(7)=$lb(1, 5)
Number(8)=$lb(3, 4)
Number(9)=$lb(1, 2, 3, 5)


After that is done, any number with just one reference must be picked (meaning it's present in only one list). In our case, numbers 1 and 3, resulting in picking lists 2 and 1.


All the numbers in lists 1 and 2 are thereby covered: 1, 2, 3, 5, 6, 7, 9


Next, we delete the Numbers we have already covered, leaving us with:

Number(4)=$lb(4, 5)
Number(8)=$lb(3, 4)

From the remaining Numbers we also need to remove the lists we already picked (1 and 2), though in your example they are not present anyway.


However, after this cleanup we might again find a number that is present in only one list. In that case, redo the first step until no number is present in only one list. In our case:


Number(4)=$lb(4, 5)
Number(8)=$lb(3, 4)


After that, pick the list that covers the largest number of distinct remaining numbers (list 4 in our case) and repeat from the beginning. Eventually you'll arrive at an empty Numbers local, meaning the task is complete.
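The transpose-and-pick procedure can likewise be sketched in Python (a rough greedy sketch under the assumptions above; like any greedy set-cover heuristic, it is not guaranteed to produce a minimal answer):

```python
def pick_lists(lists):
    """lists: dict of list id -> set of numbers; greedy cover via the transposed view."""
    # Transpose: Number(n) = the set of lists that contain n.
    number = {}
    for lid, nums in lists.items():
        for n in nums:
            number.setdefault(n, set()).add(lid)

    picked = set()
    while number:
        # A number referenced by exactly one list forces that list to be picked.
        forced = {next(iter(ls)) for ls in number.values() if len(ls) == 1}
        if forced:
            chosen = forced
        else:
            # Otherwise pick the list covering the most remaining numbers.
            remaining = set(number)
            chosen = {max((lid for lid in lists if lid not in picked),
                          key=lambda lid: len(lists[lid] & remaining))}
        picked |= chosen
        # Delete the numbers covered by the chosen lists...
        for n in set().union(*(lists[lid] for lid in chosen)):
            number.pop(n, None)
        # ...and drop picked lists from the remaining references.
        for ls in number.values():
            ls -= picked
    return sorted(picked)

example = {
    1: {3, 5, 6, 7, 9},
    2: {1, 2, 6, 9},
    3: {5, 8, 9},
    4: {2, 4, 6, 8},
    5: {4, 7, 9},
}
print(pick_lists(example))  # → [1, 2, 4]
```

On the example, the first iteration is forced (numbers 1 and 3 each live in a single list, picking lists 2 and 1), and the second iteration greedily picks list 4, which covers both remaining numbers.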

In the main method of EnsLib.HTTP.OutboundAdapter, SendFormDataArray, line 5, you can see the following code:

#; Create an Http Request Object
Set tHttpRequest=$S($$$IsdefObject(pHttpRequestIn):pHttpRequestIn,1:##class(%Net.HttpRequest).%New())  $$$ASSERT($IsObject(tHttpRequest)&&tHttpRequest.%IsA("%Net.HttpRequest"))

Which creates a new empty %Net.HttpRequest object unless pHttpRequestIn (the 3rd argument) has been passed with a custom request object. Most wrapper methods (VERB, VERBURL and VERBFormDataArray) do not pass pHttpRequestIn, so you should get a fresh request object every time, but SendFormData and SendFormDataURL pass pHttpRequestIn through from the caller, if given.

Another place to look into is the Send call:

Set tSC=tHttpRequest.Send($ZCVT(pOp,"U"),$G(pURL,..URL),..#DEBUG)

It has the following signature:

Method Send(type As %String, location As %String, test As %Integer = 0, reset As %Boolean = 1) As %Status

The reset argument defaults to 1, and when it's true, the Reset method of %Net.HttpRequest is called after every request. Reset removes headers (among other things), as you can see in the implementation:

Kill i%Headers,i%FormData,i%Params

EnsLib.HTTP.OutboundAdapter never calls Send with reset=0, so the request should be reset every time.

That said, how do you call SendFormDataArray?

I took a look at the article you linked. Can you add logging to the GetRequest method? It should be called for every business operation message.

How about the following setup:

  1. In the Business Service or first router BP, set a global: ^data(mrn, timestamp) = messageId
  2. In your FIFO BP, before sending to the BO, check: 
    • set nexttimestamp=$o(^data(mrn,""),1,nextmessageId)
  3. If nextmessageId equals the current messageId, there is no message in the pipeline for the same patient with an earlier timestamp, so we can send it out.
    • Kill ^data(mrn, nexttimestamp) so the next message can be processed 
  4. If nextmessageId does not equal the current messageId, compare timestamps:
    • If the timestamps are equal, send the message anyway and don't kill the subscript: we have more than one message with the same timestamp. If this happens often, the value should be a list of ids.
    • If nexttimestamp is earlier than the current timestamp, there are other messages in the pipeline with the same MRN; sleep for 10 seconds and check again.

Notes:

  1. You'll need to adjust this based on what should happen if one of the messages errors before being deleted from the ^data global; options:
    • Processing of messages for this patient effectively stops.
    • Add an additional check in (4): get the other message's header and check if it's in a final state (completed, errored, etc.); if so, clear the ^data subscript and continue.
    • Add an additional check in (4): if we have waited more than X seconds, continue.
  2. These checks can be wrapped as Custom Functions and called from rules and BPs.
  3. Locks might help with ensuring consistency.

The advantage here is that you can scale to any number of jobs immediately, and since you enforce FIFO only at the end, most of the processing can be parallelized.
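The gate described in steps 2-4 can be sketched like this (a Python stand-in: a per-MRN dict plays the role of the ^data(mrn, timestamp) global, and min() plays the role of $ORDER over the first timestamp subscript; all names are illustrative):

```python
pending = {}  # stand-in for ^data: mrn -> {timestamp: messageId}

def register(mrn, timestamp, message_id):
    """Step 1: the Business Service / first router records the message."""
    pending.setdefault(mrn, {})[timestamp] = message_id

def may_send(mrn, timestamp, message_id):
    """Steps 2-4: the FIFO gate before the BO.
    True means the message may go out now; False means wait and retry."""
    queue = pending.get(mrn, {})
    if not queue:
        return True                 # nothing registered for this patient
    next_ts = min(queue)            # earliest pending timestamp for the MRN
    if queue[next_ts] == message_id:
        del queue[next_ts]          # kill the subscript: unblocks the next message
        return True
    if next_ts == timestamp:
        return True                 # duplicate timestamp: send, keep the subscript
    return False                    # an earlier message is still pending

# Two messages for the same patient checked out of order:
register("MRN1", 1, "m1")
register("MRN1", 2, "m2")
print(may_send("MRN1", 2, "m2"))    # → False (m1 is still pending)
print(may_send("MRN1", 1, "m1"))    # → True
print(may_send("MRN1", 2, "m2"))    # → True
```

In the real production, the False branch is where the 10-second sleep from step 4 goes, and the error-handling options from the notes above would hook in at the same spot.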

There's no guarantee that a BP with PoolSize=1 would process messages in enqueued order. That is only true for BOs; for BPs, it's only guaranteed that message processing starts in the enqueued order. That might (or might not) be a good enough guarantee for you, depending on your use case.

An approach I've seen used to guarantee FIFO for a BP is to add an intermediate BP with PoolSize=1 which sends a sync request to your target BP and waits for the answer.

Can you elaborate on the Message key, please? Is it a random or a categorical division? And why three specifically and not some other number?

What we were considering was increasing the pool size to 3 and programmatically having each thread's BPL process the messages directed to it, rather than having to create multiple BPLs in the production.

What do you want to achieve with that change?