Without additional details it's hard to say for certain. If it's an issue on the Caché side, you might consider using $SEQUENCE in your storage model.

$SEQUENCE provides a fast way for multiple processes to obtain unique (non-duplicate) integer indices for the same global variable. For each process, $SEQUENCE allocates a sequence (range) of integer values. $SEQUENCE uses this allocated sequence to assign a value to gvar and returns this gvar value. Subsequent calls to $SEQUENCE increment to the next value in the allocated sequence for that process. When a process consumes all of the integer values in its allocated sequence, its next call to $SEQUENCE automatically allocates a new sequence of integer values. $SEQUENCE automatically determines the size of the sequence of integer values to allocate. It determines the size of the allocated sequence separately for each sequence allocation. In some cases, this allocated sequence may be a single integer.

$SEQUENCE is intended to be used when multiple processes concurrently increment the same global. $SEQUENCE allocates to each concurrent process a unique range of values for the gvar global. Each process can then call $SEQUENCE to assign sequential values from its allocated range of values.
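The pattern above can be sketched in ObjectScript as follows; the global name ^MyAppData and the record payload are hypothetical:

```objectscript
 // Each concurrent process calls $SEQUENCE against the same global.
 // The function hands back a value from that process's privately
 // allocated range, so no two processes ever receive the same index.
 Set idx = $SEQUENCE(^MyAppData)     ; unique across all concurrent callers
 Set ^MyAppData(idx) = "some record payload"
```

Because each process consumes indices from its own pre-allocated block, the assigned indices are unique but not necessarily contiguous in time order.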

In your class definition's storage you could consider changing the IdFunction keyword from the default, which is increment, to sequence.
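For illustration only (the class/global names here are hypothetical, not from your model), the relevant fragment of a storage definition would look something like:

```xml
<Storage name="Default">
  <DataLocation>^My.AppD</DataLocation>
  <IdLocation>^My.AppD</IdLocation>
  <IdFunction>sequence</IdFunction>
  <IndexLocation>^My.AppI</IndexLocation>
</Storage>
```

With IdFunction set to sequence, new object IDs are drawn via $SEQUENCE rather than $INCREMENT.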

But again, more detail is needed.

Nicely done. While I haven't contributed to the project, I do have a User Snippet

"Create Status Method": {
  "prefix": "Method Create ClassMethod Returning %Status",
  "body": [
    "///${1:Description}",
    "///<example>",
    "/// Set tSC=##class($TM_FILENAME_BASE).${2:MethodName}()",
    "///</example>",
    "ClassMethod ${2:MethodName}(${3:parameters}) As %Status",
    " {",
    "  #dim tSC \t\t\t As %Status=\\$\\$\\$OK",
    "  #dim eException \t As %Exception.AbstractException",
    "  try {",
    "   $0",
    "  }",
    "  catch eException {",
    "   Set tSC=eException.AsStatus()",
    "  }",
    " Quit tSC",
    " }"
  ],
  "description": "Method Create ClassMethod Returning %Status"
}

that utilizes the &lt;example&gt; Documatic special tag, as I too was tired of typing ##class when testing methods. My approach is different, and it's not perfect, but it works for me. It also produces a class method with a consistent pattern/structure.

I've run into something similar, but with a different streamlet type. The subscript error indicates that it is getting a null value where it expects a non-null value. I would suggest looking at the medication streamlet and ensuring that all of the required fields are defined; it might be that Medication.OrderItem.Code is null.

If those fields are not used in the ReportDisplay, i.e. the HTML or PDF output, you could consider removing them from the ReportDefinition. If, on the other hand, you do need these fields, then leave them in. However, something else seems to be going on. I commonly have this style of ReportDefinition:

The usage of &lt;macrodef&gt; just allows me to have a single set of code that injects the same attributes into the &lt;Master&gt; node.
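A minimal hypothetical sketch of that pattern (the expressions and most attribute names are illustrative, not my actual report):

```xml
<report xmlns="http://www.intersystems.com/zen/report/definition"
        name="Master">
  <!-- Attributes injected onto the root <Master> node -->
  <attribute name="runDt" expression="$ZDATE(+$HOROLOG)"/>
  <attribute name="runBy" expression="$USERNAME"/>
  <attribute name="Namespace" expression="$NAMESPACE"/>
  <!-- ...remaining groups/elements of the report body... -->
</report>
```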

When running the report in XML mode, it produces:

<Master runDt="10/13/2022" footerDate="October 13, 2022" runTm="08:47AM" runBy="_SYSTEM" Draft="" ISCPrime="0" Instance="HEALTHSHARE" Namespace="HSEDGE1" Server="LAPTOP-ET8APOSR" InternalURLLinkPrefix="http://localhost:52773" CustomerName="Ready Computing, Inc" CoverPageDate="October 13, 2022" CustomerLink="" PageNumbering="1" SystemMode="" FilterSpecDisplay="" ReportName="Ensemble Message Management" HeaderLine2="" Top="9999999999999" ReportClass="RC.Ens.Report.MessageManagement" ZVERSION="IRIS for Windows (x86-64) 2021.1.2 (Build 336_0_21564U) Tue Apr 12 2022 12:13:45 EDT" ReportTitle="">

I don't think attributes found in the root node (in my case, Master) are emitted to the Excel file.

If I run the report in MODE=XLSX, it produces output that, note, does not show the attributes.

So this answers your question regarding your attributes at the top level. Your question regarding the elements at the ProssData node is something different. In my example, if I change

to 

and then run the report in XLSX mode, the attributes Namespace and TaskId do not appear in the Excel file. However, I would have to change the ReportDisplay so that I now reference

@Namespace and @TaskId

if I want the PDF and HTML output to work properly.
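For example (a hypothetical fragment, not my actual report), switching those two values to attributes pairs up with the display like this:

```xml
<!-- ReportDefinition: Namespace and TaskId emitted as attributes -->
<group name="Task">
  <attribute name="Namespace" field="Namespace"/>
  <attribute name="TaskId" field="TaskId"/>
</group>

<!-- ReportDisplay: the XPath reference now needs the @ prefix -->
<item field="@Namespace"/>
<item field="@TaskId"/>
```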

I too am not entirely sure what your issue is, but when I have particularly complex reports I leverage the ability to define multiple &lt;group&gt; entities in the ReportDefinition. Consider, for example, a data model that has:

  • Batch
  • Claim
  • ClaimLine
  • ClaimLineAdjudication

While you could define a single &lt;group&gt; entity which calls a single query, I prefer to use class queries and then leverage breakOnField to create multiple &lt;group&gt;s. I find it makes more sense to have something like the following, albeit this is pseudo-code and not 100% what is entered in a ReportDefinition:

<group name="Batches" queryName="GetBatches">
  <group name="Claims" queryName="ClaimsByBatch">
    <parameter field="Batch"/>
    <group name="Claim">
      <group name="Lines" queryName="LinesByClaim">
        <parameter field="Claim"/>
        <group name="Line">
          <group name="LineAdjudications" queryName="AdjudicationByClaimLine">
            <parameter field="Claim"/>
            <parameter field="Line"/>
            <group name="LineAdjudication">
            </group>
          </group>
        </group>
      </group>
    </group>
  </group>
</group>

In this manner, each level/group is responsible for doing one thing. If needed, I've also added a filter attribute to the &lt;group&gt; tag. An actual example I have is this:

fields="ClassName,LocalTimeChanged,RemoteTimeChanged,Description,LocalFkeys,RemoteFkeys,LocalIndices,RemoteIndices,LocalMethods,RemoteMethods,LocalProperties,RemoteProperties,LocalQueries,RemoteQueries,LocalParameters,RemoteParameters,LocalTriggers,RemoteTriggers" filter="..Filter(..FilterSpec)">

whereby I send to my Filter method the field values specified in the fields attribute, as well as a filter specification to test against. You don't have to do it this way; it's just an example of using the fields attribute and the filter attribute, which can be any valid ObjectScript expression.
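Put together, a hypothetical &lt;group&gt; using both attributes might look like this (the query and method names are illustrative):

```xml
<group name="Classes" queryName="CompareClasses"
       fields="ClassName,Description"
       filter="..Filter(..FilterSpec)">
  <element name="ClassName" field="ClassName"/>
  <element name="Description" field="Description"/>
</group>
```

As I understand it, the filter expression is evaluated for each row, and rows for which it returns 0 are left out of the generated XML.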

Based on 

it looks like History.Views may not have indices, or at least doesn't have a bitmap index. If the table had a bitmap index, it shouldn't take 30 seconds to count the rows... but maybe I'm incorrect; seeing the query plan would have provided that insight.
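For reference, a bitmap extent index is a one-line addition to the class. This is a sketch assuming History.Views is an ordinary persistent class (the property is invented for illustration):

```objectscript
Class History.Views Extends %Persistent
{

Property ViewDate As %Date;

/// Bitmap extent index: SELECT COUNT(*) can be answered by counting bits
/// in the bitmap chunks instead of walking the whole data map.
Index BitmapExtent [ Extent, Type = bitmap ];

}
```

If the class already holds data, remember the new index must be built (e.g. via %BuildIndices) before queries benefit from it.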

You report that the report now generates in 60 seconds, whereas previously it took 28 minutes for a twelve-page report. Was the primary problem/time consumed in gathering the data, or something else?

Some thoughts on what you have presented:

1. It doesn't seem like usedclassesdef.zip is available any longer.

2. When examining a query plan, I mostly do CTRL-F and search for the word "looping". The statements that say "using the given idkey value" are generally not going to be of concern. I see your query plan has "Read index map MSDS_Serenity.CustomeInstrument.InstIndx looping on Instrument and ID." This seems like one area of concern. I believe it is telling us that a table scan is being done on CustomInstrument. Should there be a join between Instrument and CustomInstrument?

3. I don't think it technically makes a difference in performance, but I like to write my queries with explicit JOIN clauses so that I have a clear understanding of the relationships between tables vs. the conditions that remove rows. So I might write it like:

 FROM MSDS_Serenity.Kit k
 JOIN MSDS_Serenity.KitContent kc ON kc.KitId = k.Id
 JOIN MSDS_Serenity.Instrument i ON i.Id = kc.Instrument
 JOIN MSDS_Serenity.InstrumentGroup g ON g.Id = i.GroupId
 JOIN MSDS_Serenity.CustomInstrument ci ON ci.   -- maybe I'm incorrect, but I don't see where the ci table is related to any other table; this is what is likely causing the table scan, and if this table has a large number of rows it could very well be causing an issue
 WHERE i.Active = 1
   AND (i.IsImplants IS NULL OR i.IsImplants != 1)
   AND k.Loc = 5
   AND k.Id = 9237

Again, I don't think this makes any technical difference, but it does allow me to isolate the relationships between tables from the filtering that occurs in the WHERE clause.

4. I'm not saying this is absolutely true, but when I see DISTINCT in a query I wonder why it was added. Is it because a missing join condition was producing duplicate rows, and DISTINCT was a convenient way to "hide" the logic flaw? I could be completely off base here, so please ignore this if I'm out of line. This is a good article on DISTINCT: https://wiki.c2.com/?SelectDistinctIsaCodeSmell
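A contrived illustration of that smell (the tables here are hypothetical):

```sql
-- Missing join condition: every row of Orders is paired with every row of
-- OrderLines, and DISTINCT is used to collapse the duplicates back down.
SELECT DISTINCT o.OrderId
  FROM Orders o, OrderLines l
 WHERE o.Status = 'OPEN'

-- Stating the intent directly needs no DISTINCT at all:
SELECT o.OrderId
  FROM Orders o
 WHERE o.Status = 'OPEN'
   AND EXISTS (SELECT 1 FROM OrderLines l WHERE l.OrderId = o.OrderId)
```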

5. Without knowing what the stored procedures are doing, this could be another place where performance issues arise.
 

Can you elaborate on what you mean by "hide" any component under ReportDefinition? The ReportDefinition is where you define the XML document that is created and later used by the ReportDisplay to output the data for a PDF report; Excel output likewise utilizes the ReportDefinition. Just because you have an entity (element/attribute/group) in a ReportDefinition doesn't mean it will be utilized in the ReportDisplay.
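As a small hypothetical illustration, a field can exist in the definition's XML yet never surface in any output:

```xml
<!-- ReportDefinition: both values land in the generated XML document -->
<group name="Order">
  <element name="OrderId" field="OrderId"/>
  <element name="AuditNote" field="AuditNote"/>
</group>

<!-- ReportDisplay: only OrderId is rendered; AuditNote is simply never referenced -->
<item field="OrderId"/>
```

If by "hide" you mean suppressing something from the output, leaving it unreferenced in the ReportDisplay is usually all that is needed.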