3.00.5

Changes Made to Match Industry Practices

  • Added the covarp function and related functions to the system initialization script file (dolphindb.dos). When upgrading the server, this file must also be updated.

  • Changed the format of subscribed streaming topics (concrete examples follow the list):

    • Previous format: host:port:nodeAlias/tableName/actionName, for example, localhost:8848:nodeA/trades/demo.

    • New format for standard subscription to regular stream tables (unchanged): host:port:nodeAlias/tableName/actionName.

    • New format for standard subscription to high-availability stream tables: clusterName_RaftGroupId/tableName/actionName.

    • New format for high-availability subscription to regular stream tables: host:port:nodeAlias/tableName/actionName/subNodeAlias.

    • New format for high-availability subscription to high-availability stream tables: clusterName_RaftGroupId/tableName/actionName/subNodeAlias.
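
    For illustration, here are hypothetical topics under the new formats, assuming a publisher node nodeA on localhost:8848, a subscribing node nodeB, a cluster named myCluster, and raft group ID 2 (all names made up):

    /*
    Standard subscription to a regular stream table (unchanged): localhost:8848:nodeA/trades/demo
    Standard subscription to a high-availability stream table:   myCluster_2/trades/demo
    High-availability subscription to a regular stream table:    localhost:8848:nodeA/trades/demo/nodeB
    High-availability subscription to a high-availability table: myCluster_2/trades/demo/nodeB
    */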

  • Deprecated configuration items: streamingHADiskLimit, streamingHAPurgeInterval.

  • Renamed streamingHADir to streamingHALogDir, and changed the default value to /log/streamingHALog.

  • Before enabling high availability for streaming processing, you must set clusterName in the configuration file; otherwise, high availability cannot be enabled.
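
    A minimal configuration sketch (the cluster name, raft group ID, and node aliases below are illustrative, not required values):

    // in the configuration file, e.g., cluster.cfg
    clusterName=myCluster
    streamingHAMode=raft
    streamingRaftGroups=2:NODE1:NODE2:NODE3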

  • Renamed the cacheLimit parameter to cacheSize in the haStreamTable function.
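
    A sketch of the rename (the raft group ID and table are made up, and the parameter names other than cacheSize follow common documentation and should be treated as assumptions):

    t = table(1000000:0, `timestamp`sym`price, [TIMESTAMP, SYMBOL, DOUBLE])
    // previous versions: haStreamTable(raftGroup=2, table=t, tableName="trades", cacheLimit=100000)
    haStreamTable(raftGroup=2, table=t, tableName="trades", cacheSize=100000)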

  • High-availability stream tables no longer support adding columns.

  • When canceling a high-availability subscription, the raftGroup must be specified; otherwise, the cancellation will fail.

  • Streaming engines currently do not support high availability. The raftGroup parameter does not take effect for them.

  • Deprecated functions amortizingFixedRateBondDirtyPrice, convertibleFixedRateBondDirtyPrice, callableFixedRateBondDirtyPrice, floatingRateBondDirtyPrice, crmwCBond, cds, and irs.

  • OLAPCacheEngineSize and TSDBCacheEngineSize cannot exceed 50% of the node's maxMemSize; exceeding this limit will cause the node to fail to start. The setOLAPCacheEngineSize and setTSDBCacheEngineSize functions report an error when attempting to set a value beyond this limit.
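
    A sketch of the new check, assuming the sizes are given in GB (as with the corresponding configuration parameters) on a node configured with maxMemSize=64:

    setOLAPCacheEngineSize(30)    // OK: 30 GB is within 50% of maxMemSize (32 GB)
    setOLAPCacheEngineSize(40)    // reports an error in the new version: exceeds the 50% limit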

  • After the restoreSettings function is executed, the current user will be forcibly logged out and must log in again to regain the corresponding permissions.

  • createTimeSeriesEngine and createDailyTimeSeriesEngine: When the fill parameter is specified, previous versions do not populate the array vector columns, while the new version does.

  • When createTimeSeriesEngine and createDailyTimeSeriesEngine are called with multiple windowSize values:

    • Previous versions output data only when every window contains data.

    • The new version outputs data whenever the largest window contains data, and returns null for the metrics of windows that contain no data.

    // one metric per window: last(col1) for the 1000 ms window, last(col2) for the 5000 ms window
    metrics = [<last(col1)>, <last(col2)>]
    st1 = streamTable(1000000:0, `timestamp`col1`col2, [TIMESTAMP, DOUBLE, DOUBLE])
    st2 = streamTable(100:0, `timestamp`col1`col2, [TIMESTAMP, DOUBLE, DOUBLE])
    demoEngine = createDailyTimeSeriesEngine(name="demoEngine", windowSize=[1000, 5000], step=1000, metrics=metrics, dummyTable=st1, outputTable=st2, timeColumn=`timestamp)
    
    // the gaps in the timestamps leave some 1000 ms windows empty
    timestampv = temporalAdd(2026.01.01T00:00:00.001, [0, 1, 2, 4, 8], "s")
    col1 = double(1..5)
    col2 = double(11..15)
    tmp = table(timestampv as timestamp, col1 as col1, col2 as col2)
    demoEngine.append!(tmp)
    
    st2
    /*
    Previous versions:
    timestamp               col1 col2
    ----------------------- ---- ----
    2026.01.01T00:00:01.000 1    11  
    2026.01.01T00:00:02.000 2    12  
    2026.01.01T00:00:03.000 3    13  
    2026.01.01T00:00:05.000 4    14  
    
    New version:
    timestamp               col1 col2
    ----------------------- ---- ----
    2026.01.01T00:00:01.000 1    11  
    2026.01.01T00:00:02.000 2    12  
    2026.01.01T00:00:03.000 3    13  
    2026.01.01T00:00:04.000      13  
    2026.01.01T00:00:05.000 4    14  
    2026.01.01T00:00:06.000      14  
    2026.01.01T00:00:07.000      14  
    2026.01.01T00:00:08.000      14  
    */
  • If the window is not 0:0 in createWindowJoinEngine and createNearestJoinEngine:

    • Previous versions do not support outputting non-aggregated results, while the new version does.

    • When a metric outputs a table instead of a column that maps to an array vector, previous versions reported an error, while the new version automatically converts the result into an array vector.

  • createWindowJoinEngine and createNearestJoinEngine: In previous versions, the left table's array vector columns could be used as input vectors for aggregate functions. The new version no longer supports this, but the same result can be achieved with the byRow function, as shown in the example below.

    share streamTable(1:0, `timev`sym`id`price, [TIMESTAMP, SYMBOL, INT, DOUBLE[]]) as leftTable
    share streamTable(1:0, `timev`sym`id`val, [TIMESTAMP, SYMBOL, INT, INT]) as rightTable
    output = table(1:0, `timev`sym`price`val`factor, [TIMESTAMP, SYMBOL, DOUBLE[], INT[], DOUBLE])
    
    wjEngine = createWindowJoinEngine(name="testWindowJoin", leftTable=leftTable, rightTable=rightTable, outputTable=output, window=0:0, metrics=<[price, val, percentile(price, 20)]>, matchingColumn="sym", timeColumn="timev", useSystemTime=false, garbageSize=5000, maxDelayedTime=3000, outputElapsedMicroseconds=false, sortByTime=false)
    // In previous versions, the function runs as expected.
    // In the new version, the function reports an error: Invalid metric percentile(price, 20). Usage: percentile(X, percent, [interpolation='linear']). X must be a numeric vector.
    
    // In the new version, use the following script to achieve the original calculation logic
    wjEngine = createWindowJoinEngine(name="testWindowJoin", leftTable=leftTable, rightTable=rightTable, outputTable=output, window=0:0, metrics=<[price, val, byRow(percentile{,20}, price)]>, matchingColumn="sym", timeColumn="timev", useSystemTime=false, garbageSize=5000, maxDelayedTime=3000, outputElapsedMicroseconds=false, sortByTime=false)
  • In previous versions, createNearestJoinEngine did not support using the left table's array vector columns as calculation metrics; the new version does.

  • The ON clause in multi-table JOIN no longer supports the use of aggregate functions.
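
    A minimal sketch (tables and columns are made up):

    t1 = table(1 2 3 as id, 10 20 30 as x)
    t2 = table(1 2 3 as id, 40 50 60 as y)
    // no longer supported: an aggregate function in the ON clause
    // select * from t1 inner join t2 on t1.id = max(t2.id)
    select * from t1 inner join t2 on t1.id = t2.id    // plain column conditions still work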

  • Changed the default value of enableORCA from true to false.

  • Added permission management for Orca, which requires users to obtain the corresponding permissions before executing Orca-related operations.

  • Changed the permission required for reading Orca stream tables across clusters from TABLE_READ to ORCA_TABLE_READ.
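
    A sketch, assuming the standard grant syntax applies to the new access type (the user and object names are made up):

    // previous versions: grant("user1", TABLE_READ, "trades")
    grant("user1", ORCA_TABLE_READ, "trades")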

  • Split the notional field into notionalAmount and notionalCurrency for the INSTRUMENT object.

  • Renamed the Spot type in the mktDataType field to Price for the MKTDATA object.

  • The getClusterDFSDatabases and getClusterDFSTables functions no longer allow non-admin users to view Orca-specific system databases and tables. Admin users can view them by setting the includeSysDb and includeSysTable parameters.
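
    For example (assuming includeSysDb belongs to getClusterDFSDatabases and includeSysTable to getClusterDFSTables, and that both default to hiding the system objects):

    // admin users only
    getClusterDFSDatabases(includeSysDb=true)
    getClusterDFSTables(includeSysTable=true)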

  • dropStreamTable no longer supports deleting Orca stream tables; use dropOrcaStreamTable to delete them. dropStreamEngine no longer supports deleting Orca engines; use dropStreamGraph to delete the stream graph to which the engines belong.
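
    A sketch of the migration (the table, engine, and stream graph names are made up; each function is assumed to take the object name as its argument):

    // previous versions:
    // dropStreamTable("orcaTrades")
    // dropStreamEngine("orcaEngine")
    // new version:
    dropOrcaStreamTable("orcaTrades")
    dropStreamGraph("myGraph")    // drops the stream graph that the engine belongs to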

System Impacts Caused by Bug Fixes

  • For in-memory tables, the result type of adding time-type data has changed:

    • In previous versions, the result was of a time type.

    • In the new version, the result is of an integer type.

    t1 = table([2026.01.01,2026.01.02,2026.01.03] as timeCol)
    t2 = table([2026.02.01,2026.02.02,2026.02.03] as timeCol)
    t1 + t2
    
    /*
    Results in previous versions:
    timeCol   
    ----------
    2026.01.01
    2026.01.02
    2026.01.03
    
    Results in the new version:
    timeCol
    -------
    40939  
    40941  
    40943  
    */
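
    If a time-typed result is needed in the new version, the integer result can be cast back explicitly, e.g. (a sketch):

    t3 = t1 + t2
    t4 = table(date(t3.timeCol) as timeCol)    // cast the INT sums back to DATE
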
  • When calling the row and at functions on an array vector, the behavior has changed:

    • row: In previous versions, row(start:end) returned incorrect results when retrieving multiple consecutive rows. This issue is fixed in the new version.

    • at: In previous versions, at([index1 index2]) could be used to get specific rows. In the new version, this form throws an error; rewrite it as at([index1,index2]).

    a = array(INT[], 0, 15).append!([1 2 3, 5 6, 7 8 9, 10 11 12 15])
    a.row(0:2)
    /*
    Previous versions: output:[1,5]
    New version: output:[[1,2,3],[5,6]]
    */
    
    a.at([1 2])
    /*
    Previous versions: output: [[5,6],[7,8,9]]
    New version: reports an error: Invalid index
    Rewritten as a.at([1,2]) in the new version: output: [[5,6],[7,8,9]]
    */
  • When the publish-subscribe functionality is disabled (maxPubConnections=0), the behavior of the getStreamingStat function has changed:

    • The function was executed as expected in previous versions.

    • The function reports an error in the new version.

  • When data is written to a keyed stream table and the data types of the appended key or value columns do not match the table's schema, the write behavior has changed:

    • In previous versions, duplicates were removed based on the original (unconverted) key values, then the data types were converted, and then the data was written.

    • In the new version, the data types are converted first, duplicates are removed based on the converted key values, and then the data is written.

    kt = keyedStreamTable(`date, 1:0, `date`value, [DATE, DOUBLE])
    // the two appended TIMESTAMP values differ, but both convert to the same DATE key 2026.01.01
    kt.append!(table(2026.01.01T12:00:00.000 as date, 1.1 as value))
    kt.append!(table(2026.01.01T12:00:01.000 as date, 2.2 as value))
    kt.size()
    /*
    Previous versions: output:2
    New version: output:1
    */
  • Added metric validation for the anomaly detection engine created by createAnomalyDetectionEngine (see the sketch after this list). If the metric is not a boolean expression:

    • The function was executed as expected in previous versions.

    • The function reports an error in the new version.
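
    A minimal sketch of the new validation (the table schema and threshold are made up; the output table schema follows this engine's common usage):

    share streamTable(1000:0, `time`sym`qty, [TIMESTAMP, SYMBOL, INT]) as trades
    adOutput = table(1000:0, `time`sym`type`metric, [TIMESTAMP, SYMBOL, INT, STRING])
    // a boolean metric such as qty > 100 passes the new validation;
    // a non-boolean metric such as sum(qty) now reports an error instead of running
    adEngine = createAnomalyDetectionEngine(name="adEngine", metrics=<[qty > 100]>, dummyTable=trades, outputTable=adOutput, timeColumn=`time, keyColumn=`sym)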

Upgrade Notes

Due to compatibility issues with permission files, you need to back up permission-related ACL files before upgrading. If a Raft cluster is used, Raft-related files must also be backed up. The list of files to back up is as follows:

  • Single node (HomeDir path reference: <YourPath>/server/local8848)

    Files to back up under <homeDir>/sysmgmt/:

    • aclCheckPoint.meta

    • aclEditlog.meta

  • Regular cluster (HomeDir path reference: <YourPath>/server/cluster/data/<controllerAlias>)

    Files to back up under <homeDir>/sysmgmt/:

    • aclCheckPoint.meta

    • aclEditlog.meta

  • High availability cluster (HomeDir path reference: <YourPath>/server/clusterDemo/data/<controllerAlias_k>; all controller nodes in the Raft group need to be backed up)

    Files to back up under <homeDir>/sysmgmt/:

    • aclCheckPoint.meta

    • aclEditlog.meta

    Files to back up under <homeDir>/raft/:

    • raftHardstate

    • raftSnapshot

    • raftWAL