1.30.21

DolphinDB Server

New Features

  • Added configuration parameter logicOrIgnoreNull. The default value is true, meaning NULL values in the operands of the or function are ignored. Set it to false if the behavior of or needs to be consistent with earlier versions. (1.30.21.4)
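
    A hedged sketch of the new default (the values below are illustrative, not from the release notes):

      // with logicOrIgnoreNull=true (default), a NULL operand is ignored and the
      // non-NULL operand decides the result; set the parameter to false to keep
      // the NULL handling of earlier versions
      or(true, NULL)     // expected: true
      or(false, NULL)    // expected: false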

  • is null is now supported in a case when clause. (1.30.21.4)
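
    A small usage sketch (the table and column names are illustrative):

      t = table(1 2 NULL 4 as x)
      select case when x is null then 0 else x end as xFilled from t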

  • Added configuration parameter mvccCheckpointThreshold to set the threshold for the operations to trigger a checkpoint. (1.30.21.3)

  • Added function forceMvccCheckpoint to manually trigger a checkpoint. (1.30.21.3)

  • Added a license server to manage resources for the nodes specified in the license. (1.30.21)

    • Related functions: getLicenseServerResourceInfo, getRegisteredNodeInfo.

    • Related configuration parameters: licenseServerSite, bindCores.

  • Added configuration parameter thirdPartyAuthenticator to authenticate user logins through a third-party system. (1.30.21)

  • Server version is now automatically checked when a plugin is loaded. (1.30.21)

  • Support asynchronous replication across clusters for data consistency and offsite disaster recovery. (1.30.21)

  • Support Apache Arrow format. (1.30.21)

  • Added command setMaxConnections to dynamically configure the maximum number of connections on the current node. (1.30.21)

  • Added function demean to center a data set. This function can be used as a state function in the reactive state engine. (1.30.21)
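
    A quick illustration of the expected behavior:

      demean(1 2 3 4)    // subtracts the mean (2.5): [-1.5, -0.5, 0.5, 1.5]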

  • Added parameter ordered for functions dict and syncDict to create an ordered dictionary whose key-value pairs preserve the insertion order of the input. (1.30.21)
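
    A hedged sketch, assuming ordered is passed as the third argument (the release note does not state its position):

      d = dict(`c`a`b, 3 1 2, true)    // ordered=true: key-value pairs keep input order
      d.keys()                         // expected: ["c","a","b"]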

  • Added support for the following binary operations, as sketched below (1.30.21):

    • dictionary and dictionary

    • scalar and dictionary

    • vector and dictionary
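
    A minimal sketch of the newly supported combinations (values are illustrative):

      d1 = dict(`a`b, 1 2)
      d2 = dict(`a`b, 10 20)
      d1 + d2     // dictionary op dictionary, matched by key
      d1 * 10     // scalar op dictionary, applied to every value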

  • Added cumulative function cumnunique to obtain the cumulative count of unique elements. This function can be used as a state function in the reactive state engine. (1.30.21)
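
    Expected behavior as we read this note:

      cumnunique(1 2 2 3 1)    // cumulative count of distinct elements: [1, 2, 2, 3, 3]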

  • Added function stringFormat to generate strings with specified values and placeholders. (1.30.21)

  • Added function nanInfFill to replace NaN and Inf values. (1.30.21)

  • Added function byColumn to apply functions to each column of a matrix. The function is also supported in stream processing. (1.30.21)

  • Added function volumeBar for data grouping based on the cumulative sum. (1.30.21)

  • Added function enlist to return a vector (or tuple) with a scalar (or vector) as its only element. (1.30.21)
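
    A quick illustration following the description above:

      enlist(5)        // a vector whose only element is 5
      enlist(1 2 3)    // a tuple whose only element is the vector 1 2 3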

  • Added operator eachAt(@) to access elements of a vector, tuple, matrix, table, or dictionary by index. (1.30.21)
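
    A hedged sketch (DolphinDB indexing is 0-based):

      x = 10 20 30
      x @ 1            // expected: 20
      eachAt(x, 0 2)   // expected: [10, 30]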

  • Added functions latestKeyedTable and latestIndexedTable to create a keyed table or indexed table with a time column. When a new record is appended to the table, it overwrites the existing record with the same primary key only if its timestamp is greater than that of the existing record. (1.30.21)

  • Enhanced support for ANSI SQL features, as sketched below (1.30.21):

    • Clauses: drop, alter, case when, union/union all, join on, with as, create local temporary table

    • Predicates: (not) between and, is null/is not null, (not) exists, any/all

    • Functions: nullIf, coalesce

    • Keywords: distinct
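
    A short sketch exercising a few of these features on an in-memory table (names are illustrative):

      t = table(1 2 NULL 4 as id, `a`b`c`a as sym)
      select distinct sym from t
      select nullIf(id, 2) as id2, coalesce(id, 0) as idFilled from t where id is null or id between 1 and 3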

  • Support multiple joins, joins with table aliases, and joins with a table object returned by a SQL subquery. (1.30.21)

  • SQL select can select a constant without specifying an alias, and the value will be used as the column name. (1.30.21)

  • SQL predicates and operators can be applied to the result table returned by a SQL subquery. (1.30.21)

  • Added configuration parameter oldChunkVersionRetentionTime to specify the retention time for old chunk versions in the system. (1.30.21)

  • Support built-in trading calendars of major exchanges and user-defined trading calendars. These calendars can be used in functions temporalAdd, resample, asFreq, and transFreq for frequency conversion. (1.30.21)

    • Related configuration parameter: marketHolidayDir

    • Related functions: addMarketHoliday, updateMarketHoliday, and getMarketCalendar to add, update and get user-defined trading calendars.

  • Added functions genericStateIterate and genericTStateIterate to iterate over streaming data with a sliding window. (1.30.21)

  • Support if-else statement in the reactive state engine. (1.30.21)

Improvements

  • Added keyword distinct to eliminate duplicate records. It currently cannot be used with group by, context by, or pivot by. (1.30.21.6)

  • The outputElapsedInMicroseconds parameter of function createTimeSeriesEngine is renamed to outputElapsedMicroseconds. (1.30.21.4)

  • The fields "createTime" and "lastActiveTime" returned by function getSessionMemoryStat are now displayed in local time. (1.30.21.4)

  • Enhanced ANSI SQL support for the between and predicate. (1.30.21.4)

  • More operations on the IPC in-memory tables are logged for better tracking and debugging. (1.30.21.4)

  • Function getClusterDFSTables now returns only the DFS tables to which the user has access. (1.30.21.3)

  • Parameter handler of function subscribeTable supports shared in-memory tables, keyed tables, and indexed tables. (1.30.21.3)

  • Function cut now supports tables/matrices. (1.30.21.3)

  • Ordered dictionaries now support unary window functions. (1.30.21.3)

  • Support checksum for the metadata files. (1.30.21)

  • To avoid excessive disk usage by the recovery log, recovery tasks are now cleared on a node that has been switched from leader to follower. (1.30.21)

  • All backup and restore activities are fully logged. (1.30.21)

  • Added parameters close, label, and origin for function interval. (1.30.21)

  • Function getRecentJobs now returns fields "clientIp" and "clientPort", indicating the client IP address and port. (1.30.21)

  • Added parameter warmup for function ema. If it is set to true, the elements in the first (window-1) windows are also calculated. (1.30.21)

  • Unary and ternary functions can now be specified for the higher-order functions accumulate and reduce. (1.30.21)

  • Added new parameter outputElapsedMicroseconds for the reactive state engine and time-series engine to output the elapsed time. (1.30.21)

  • Added new parameter precision for functions rank and rowRank to set the precision of the values to be sorted. (1.30.21)

  • Parameter mode of function groups supports "vector" and "tuple". (1.30.21)

  • Function linearTimeTrend supports calculations on a matrix or table. (1.30.21)

  • Added support for iterations using multiple higher-order functions. Added parameter consistent for higher-order functions eachLeft, eachRight, eachPre, eachPost, and reduce to determine the data type and form of the results of the subtasks. (1.30.21)

  • Parameter tiesMethod of functions rank and rowRank supports "first", which assigns ranks to equal values in the order they appear in the vector. (1.30.21)

  • Function cut now supports scalars. (1.30.21)

  • Rows of a matrix can now be accessed with slice. (1.30.21)

  • The size of a tuple is no longer limited to 1048576. (1.30.21)

  • The defaultValue argument passed to function array now supports STRING type. (1.30.21)

  • Function memSize now returns the memory usage of tuple. (1.30.21)

  • Query results from different partitions can now be combined by multiple threads to reduce the elapsed time of the merge phase. (1.30.21)

  • Function getSessionMemoryStat now returns related cache information. (1.30.21)

  • Column comments can now be added to MVCC tables with setColumnComment. (1.30.21)

  • Matrices can now be indexed with a pair or a vector. (1.30.21)

  • Modified the actual available memory configured by regularArrayMemoryLimit. (1.30.21)

  • Modified the upper limit on the number of DFS databases and tables. (1.30.21)

  • The filter condition of function streamFilter supports built-in functions. (1.30.21)

  • Added new parameter sortColumns for function replay to sort the data with the same timestamp. (1.30.21)

  • Support automatic alignment of data sources for N-to-1 replay. (1.30.21)

  • The window size is capped at 102400 when m-functions are used in the streaming engines. (1.30.21)

  • Optimized the performance of heterogeneous replay. (1.30.21)

  • Function streamEngineParser now supports function byRow nested with function contextby as metrics for the cross-section engine. (1.30.21)

  • Support higher-order function accumulate in streaming. (1.30.21)

  • Optimized the performance of function genericTStateIterate. (1.30.21)

  • Optimized the performance of function streamEngineParser. (1.30.21)

  • append / insert into operations on shared tables can now be executed within a transaction statement. (1.30.21)

  • Optimized the performance of ej on partitioned tables. (1.30.21)

  • The select statement now supports using column alias or new column name as the filter condition in the where clause. (1.30.21)

  • Optimized the performance of keyword pivot by when the last column is the partitioning column. (1.30.21)

  • The keyword context by now supports specifying a matrix or table. (1.30.21)

  • Optimized the performance of context by and group by. (1.30.21)

  • Optimized the performance of lsj at large data volumes. (1.30.21)

  • The temporal data types in a SQL where clause can now be automatically converted when interval is used to group data. (1.30.21)

  • The size of a tuple is no longer limited when used in SQL in condition. (1.30.21)

  • Modified the return value of function getSystemCpuUsage. (1.30.21)

  • Enhanced support for access control, as sketched below (1.30.21):

    • Extended privilege types at table level (TABLE_INSERT/TABLE_UPDATE/TABLE_DELETE) and database level (DB_INSERT/DB_UPDATE/DB_DELETE).

    • Modified the DB_MANAGE privilege, which no longer permits database creation. Users with this privilege can only perform DDL operations on databases.

    • Modified DB_OWNER privilege which enables users to create databases with specified prefixes.

    • Added privilege types QUERY_RESULT_MEM_LIMIT and TASK_GROUP_MEM_LIMIT to set the upper limit of the memory usage of queries.

    • Access control-related functions now can be called on data nodes.

    • Modified the permission verification mechanism of DDL/DML operations.

    • Added parameter validation for access control:

      • An error is reported if the granularity of objs does not match the accessType of grant, deny, or revoke.

      • When the TABLE_READ/TABLE_WRITE/DBOBJ_*/VIEW_EXEC permission is granted, the existence of the applied object (database/table/function view) is checked first. If it does not exist, an error is reported.

      • When an object (database/table/function view) is deleted, the applied permissions are revoked. If a new object with the same name is created later, the permissions must be reassigned.

      • Permissions are retained for renamed tables.
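
    A minimal sketch of the finer-grained privileges (the user name and object path are illustrative):

      grant("analyst1", TABLE_INSERT, "dfs://demo/trades")
      deny("analyst1", TABLE_DELETE, "dfs://demo/trades")
      revoke("analyst1", TABLE_INSERT, "dfs://demo/trades")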

  • Optimized the performance of user-defined functions in streaming engines in DolphinDB (JIT). (1.30.21)

  • DolphinDB (JIT) supports operator ratio. (1.30.21)

  • DolphinDB (JIT) supports more built-in functions: sum, avg, count, size, min, max, iif, moving. (1.30.21)

Issues Fixed

  • A function name conflict occurred between a function view and a module function at server restart when the following conditions were all satisfied (1.30.21.6):

    • the server ran in standalone mode;

    • a function defined in a module was added as a function view with addFunctionView, and the function view was then dropped;

    • the module was specified in the configuration parameter preloadModules to be preloaded.

    The error messages reported for other name conflicts were also enhanced.

  • In cluster mode, when SSL was enabled (enableHTTPS=true) for connections, the session could be disconnected if a large amount of data was transferred from the server to the client. (1.30.21.6)

  • In cluster mode, joining tables in the same database (atomic = 'CHUNK') but on different nodes could return incorrect results. (1.30.21.6)

  • The reactive state engine did not handle the namespaces defined in metrics. (1.30.21.6)

  • Incorrect results were returned by functions mskew and mkurtosis if the input X contained consecutive identical values and the number of identical values was greater than the window size. (1.30.21.6)

  • An error occurred when using order by on columns of STRING type with limit 0, k or limit k on MVCC tables. (1.30.21.5)

  • When deleting a function view with dropFunctionView, a server crash may occur due to the absence of locking during log writing. (1.30.21.5)

  • When joining two tables with equi join or inner join, incorrect results were returned if the two matching columns were of STRING and NANOTIMESTAMP types. (1.30.21.5)

  • When loading tables with loadTable, data loss may occur on the cold storage tier if the table names were improperly verified. (1.30.21.5)

  • The select distinct statement has been disabled. The keyword "distinct" is now recognized as the function distinct, i.e., the order of the elements in the result is not guaranteed to match that of the input, and the result column is named distinct_xxx. (1.30.21.5)

  • When the configuration parameter datanodeRestartInterval was set to a time less than 100 seconds, the data node was immediately restarted by the controller during a graceful shutdown or after the cluster was restarted. (1.30.21.4)

  • Incorrect conversion when the input of function toJson was a tuple that contained numeric scalars. (1.30.21.4)

  • Incorrect conversion when the input of function toJson was a dictionary with its values being vectors of ANY type. (1.30.21.4)

  • A server crash may occur when function bar with parameter interval set to 0 was used to query a partitioned table. (1.30.21.4)

  • For N-to-1 replay, an error was reported when the key of the dictionary (set by parameter inputTables) was specified as SYMBOL type. This bug occurred since version 1.30.21. (1.30.21.4)

  • Scheduled jobs failed to be executed due to the unsuccessful deserialization of file jobEditlog.meta at node startup. (1.30.21.4)

  • Scheduled jobs were still executed until the next server startup, even though the serialization was unsuccessful when they were created. (1.30.21.4)

  • A server crash occurred when the defaultValue parameter of function array was specified as a vector. (1.30.21.4)

  • Passing non-table data to the newData parameter of upsert! could crash the DolphinDB server. (1.30.21.4)

  • The upsert! function failed when the following three conditions were satisfied at the same time (1.30.21.4):

    • Only one record was to be updated

    • The newData parameter contained NULL values

    • The ignoreNull parameter was set to true

  • Attempting to add multiple new columns to an MVCC table in an update statement would result in a data type error. (1.30.21.4)

  • When specifying a column containing special characters such as control characters, punctuation marks, and mathematical symbols in the group by clause of a query, these special characters were improperly ignored. (1.30.21.4)

  • dropColumns! could not delete columns from in-memory tables with sequential partitions. (1.30.21.4)

  • A controller may crash when loading a partitioned table from the local disk. (1.30.21.4)

  • Function getClusterDFSTables may return tables that have been deleted or do not exist. (1.30.21.4)

  • The physical paths of partitions may not match the metadata after new data nodes are added and moveReplicas() is executed. (1.30.21.4)

  • For N-to-N replay, if an element of the input data source for a table was empty, data in the output table may be misplaced. (1.30.21.4)

  • Creating a streaming engine occasionally failed due to uninitialized internal variables. (1.30.21.4)

  • For operations involving data flushing, data may be lost or the operations may get stuck if the physical directory of a partition did not exist (e.g., it had been manually deleted). (1.30.21.4)

  • Incorrect result of the temporalAdd function when specifying the parameter unit as "M". (1.30.21.4)

  • A data storage error could occur when different operations were performed on the same partition. (1.30.21.3)

  • Users who are given DB_READ or TABLE_READ privileges may not be able to execute queries. (1.30.21.3)

  • Server crashed in a high-availability cluster when reading raft logs at the reboot of the controller. (1.30.21.3)

  • Server crashed when using loadText to load a CSV file that contains unpaired double quotes (""). (1.30.21.3)

  • After new columns were added to an MVCC table, a server crash occurred when checking the table schema with function schema or adding comments to the new columns with function setColumnComment. (1.30.21.3)

  • Invisible characters in the partitioning column resulted in inconsistent versions between controller and data node. (1.30.21.3)

  • Server crashed when updating an in-memory table with an out-of-bounds index. (1.30.21.3)

  • A server crash could occur when users logged in frequently in high-concurrency scenarios. (1.30.21.3)

  • Server crashed when a user-defined function was specified as the metric of function streamEngineParser. (1.30.21.3)

  • When the stream table was not defined on the publisher, the reconnection on the subscriber resulted in file descriptor leaks. (1.30.21.3)

  • Server crashed when using function parseExpr to convert a string containing a lambda function. (1.30.21.3)

  • An error was reported when using function parseExpr to convert a string ending with a semicolon. (1.30.21.3)

  • Server crashed when using function repartitionDS to repartition a joined table with parameter partitionType specified as VALUE. (1.30.21.3)

  • If the matching columns of two partitioned tables were not partitioning columns, and the two tables had partitioning columns with the same name(s), an incorrect result was returned when filtering the data with the partitioning columns of the right table. (1.30.21.3)

  • For a DFS table value-partitioned by month, an incorrect result was output when filtering the data on the first day of a month by the where condition. (1.30.21.3)

  • Server crashed when using order by in conjunction with limit to sort a column named "DATE" (case-sensitive) in reverse order. (1.30.21.3)

  • An incorrect result was output when performing time-series aggregate functions (e.g., pre, rank, etc.) on multiple columns. (1.30.21.3)

  • The data in an in-memory table returned by an aggregate function could be changed when it was subsequently used in calculations with moving functions such as move. (1.30.21)

  • An error was reported when anonymous aggregate function was specified for aggs of window join. (1.30.21)

  • For a DFS table value-partitioned by month, an incorrect result was output if the temporal type specified in the where clause was inconsistent with that of table columns, and the where condition contained the last day of the month. (1.30.21)

  • Server crashed when the independent variables (parameter X) of function ols were specified as a string. (1.30.21)

  • Server crashed when using function loadText to import data of string type. (1.30.21)

  • Server crashed when an MVCC table was used in a transaction statement. (1.30.21)

  • Incorrect results were returned when using the keyword as with function deltas in conjunction with function corr. (1.30.21)

  • Terminating the DolphinDB process with kill -9 command may cause redo logs not to be removed. (1.30.21)

  • A crash may occur when a table containing string columns was calculated in a reactive state engine. (1.30.21)

  • Submitting a metacode containing undefined variables via submitJob resulted in a crash. (1.30.21)

  • A node crash may occur after recovery from network failure in a cluster. (1.30.21)

  • Using partial application in metaprogramming with context by could obtain incorrect results. (1.30.21)

  • sqlObj could not be recognized as metacode in replayDS. (1.30.21)

  • If the left table of lj is an in-memory table and the right one is a DFS table which is located under a multilevel directory (e.g., dfs://mydbs/quotedb), an error would be reported. (1.30.21)

  • An error was reported when the metric of function createTimeSeriesAggregator contained a keyColumn. (1.30.21)

  • The getClusterPerf function caused deadlocks when executed by two nodes at the same time in a high-availability cluster. (1.30.21)

  • A crash may occur when the accumulate function was executed multiple times. (1.30.21)

  • After the execution of function createDailyTimeSeriesEngine was completed, an error may be reported for the data of temporal type in query results in some scenarios. (1.30.21)

  • Unexpected result returned by function isValid when adding two empty strings. (1.30.21)

  • NULL values were returned when more than 128 filtering conditions connected with keyword or were specified in the where clause. (1.30.21)

  • An exception thrown by function loadText may lead to deadlocks under high load. (1.30.21)

  • After a function was added as a function view, the function body returned by getFunctionView was missing a pair of brackets. (1.30.21)

  • Server crashed when a string vector was retrieved by slicing with index out of bounds. (1.30.21)

  • A crash may occur when using higher-order function each to apply a user-defined function to a table. (1.30.21)

  • No exception was thrown when the data appended to the cross sectional engine did not match the schema of dummy table. (1.30.21)

  • When joining DFS tables with a matching column different from the partitioning column, if the join result was queried with a select top clause and ordered by the partitioning column, an incorrect result was returned. (1.30.21)

  • An error was reported when using function rpc or remoteRun to call a partially applied function. (1.30.21)

  • The file storing job logs was lost when it reached 1 GB. (1.30.21)

  • The number of OpenBLAS threads was determined by the configuration parameter openblasThreads rather than by the number of CPU cores. (1.30.21)