2.00.8

New Features

  • Added a new SQL tracing tool for monitoring the time spent across the entire process of query execution. Added the configuration parameter traceLogDir to specify the directory for trace logs.
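
    For example, the parameter can be set in the node configuration file (the directory below is hypothetical):

      traceLogDir=/home/dolphindb/traceLog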

  • Added new function truncate for deleting the data in a DFS table while keeping the table schema.
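
    A minimal sketch, assuming truncate takes the database path and table name (both names below are hypothetical):

      // remove all rows from the DFS table "trades" while preserving its schema
      truncate("dfs://demoDB", "trades")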

  • Added new function checkBackup for checking the integrity of backup files. Added new function getBackupStatus for displaying detailed information about database backup and restore jobs.

  • Added new functions backupDB, restoreDB, backupTable, and restoreTable for backing up and restoring an entire database or a single table.
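
    A sketch of typical usage, assuming each function takes a backup directory followed by the database path (plus a table name for the table-level functions); all paths and names below are hypothetical:

      backupDB("/home/dolphindb/backup", "dfs://demoDB")               // back up an entire database
      restoreDB("/home/dolphindb/backup", "dfs://demoDB")              // restore it from that backup
      backupTable("/home/dolphindb/backup", "dfs://demoDB", "trades")  // back up a single table
      restoreTable("/home/dolphindb/backup", "dfs://demoDB", "trades") // restore a single table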

  • Added new configuration parameter logRetentionTime for specifying the system log retention period.

  • Added new function triggerNodeReport for triggering a chunk information report for the specified data node.

  • Added new function getUnresolvedTxn for retrieving transactions that are in the resolution phase.

  • The stream engine parser (streamEngineParser) now supports user-defined functions with nested functions as its metrics.

  • Added new function conditionalIterate for recursive computation of metrics through conditional iteration. This function can only be used in the reactive state stream engine (createReactiveStateEngine).

  • Added new function stateMavg for calculating the moving average based on previous results. This function can only be used in the reactive state stream engine (createReactiveStateEngine).

  • Function mmaxPositiveStreak can now be used in the reactive state stream engine (createReactiveStateEngine).

  • Added new function stateIterate for performing linear recursion through iteration. This function can only be used in the reactive state stream engine (createReactiveStateEngine).

  • Window join engine (createWindowJoinEngine): When the parameter window=0:0, the size of the calculation window over the right table is determined by the difference between the timestamps of the corresponding record in the left table and its most recent record.

  • Added support for the new data type DECIMAL. Storage and computation involving DECIMAL values are also supported in some functions and in the OLAP and TSDB storage engines. Note that:

    1. DECIMAL type columns cannot be specified as partitioning columns or sort columns (TSDB engine), or compressed using the "delta" method.

    2. DECIMAL type columns cannot be modified or deleted with the functions addColumn, replaceColumn!, dropColumns!, or rename!.

    3. The DECIMAL data type is not supported in stream data subscription or stream computing.

    4. loadText does not support importing columns containing DECIMAL values.
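
    A minimal sketch of working with DECIMAL values, assuming decimal32/decimal64 take the value followed by the scale (number of digits after the decimal point):

      p = decimal32(19.99, 2)                                           // a DECIMAL32 scalar with scale 2
      t = table(1..3 as id, decimal64(100.5 200.25 300.75, 4) as price) // an in-memory table with a DECIMAL64 column
      typestr(t.price)                                                  // inspect the column's data type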

  • Added new function regroup for grouped aggregation over a matrix based on user-specified column and/or row labels.

  • Added new functions mifirstNot and milastNot for returning the index of the first/last non-NULL element in a sliding window.
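
    A sketch, assuming the signature follows other moving functions (the data followed by the window size):

      x = [NULL, 3, NULL, 7, 8]
      mifirstNot(x, 3)     // index of the first non-NULL element within each sliding window of 3
      milastNot(x, 3)      // index of the last non-NULL element within each sliding window of 3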

  • Added new function loc for accessing the rows and columns of a matrix by label(s) or a Boolean vector.
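
    A sketch, assuming loc takes the matrix, a row filter (labels or a Boolean vector), and an optional column filter; the labels below are hypothetical:

      m = matrix(1 2 3, 4 5 6)
      m.rename!(2020.01.01 2020.01.02 2020.01.03, `A`B)   // attach row and column labels
      loc(m, 2020.01.02)                                  // select the row labeled 2020.01.02
      loc(m, [true, false, true], `B)                     // select rows by a Boolean vector, column B by label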

  • Added new function til for creating a vector of consecutive integers starting from 0.
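
    For example:

      til(5)          // => [0,1,2,3,4]
      2010 + til(5)   // a common idiom for building a sequence: [2010,2011,2012,2013,2014]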

  • Added new functions pack and unpack for packing and unpacking binary data.

  • Added new function align for aligning two matrices based on row labels and/or column labels using the specified join method.

  • Full join is now supported for DFS tables.
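
    A sketch, assuming the full join is expressed with the fj join function over two DFS tables sharing a key column (database, table, and column names are hypothetical):

      pt1 = loadTable("dfs://demoDB", "quotes")
      pt2 = loadTable("dfs://demoDB", "trades")
      select * from fj(pt1, pt2, `sym)    // full join of two DFS tables on the sym column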

  • DolphinDB (JIT version) now supports accessing vector elements by an index that can be a vector or a pair.

  • Web-based User Interface:

    • "Shell" tab enhancements: Added new "Database" view for checking databases and tables.

    • Added a new settings menu where you can customize the number of decimal places. For example, enter "2" to display numbers with 2 decimal places.

    • Added support for visualization of dictionaries.

    • You can now navigate to the associated documentation by clicking the error code (e.g., 'RefId: S00001').

Improvements

  • Functions backup, restore, and migrate now support backing up and restoring database partitions by copying files.

  • Functions replaceColumn!, rename!, and dropColumns! now support DFS tables.

  • Added new parameter deleteSchema to function dropPartition to determine whether to delete the partition schema when deleting a VALUE partition.

  • Function dropDatabase now deletes all physical files of the specified database.

  • The metacode of a SQL statement can now be passed to the parameter obj of function saveText; partitions are queried in parallel and written with a single thread.

  • The system now raises an error if the configuration parameter volumes is specified for a single node using the macro variable <ALIAS>.

  • The SQL like keyword is now supported in a where clause to search for a specified pattern in the sort keys of the TSDB engine.
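
    For example, assuming a TSDB table whose sort key includes a deviceId column (names below are hypothetical):

      pt = loadTable("dfs://tsdbDemo", "readings")
      select * from pt where deviceId like "A%"    // pattern search on a sort key column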

  • Optimized read performance of the TSDB engine.

  • Optimized the performance of update, delete and upsert in the TSDB storage engine.

  • Added parameter nullFill to function createWindowJoinEngine to fill in the NULL values in the output table.

  • The parameter timeRepartitionSchema of function replayDS supports more temporal types.

  • Optimized the garbage collection logic of window join engine.

  • Identical expressions using user-defined functions are only calculated once in the reactive state stream engine.

  • Added SQL keyword HINT_VECTORIZED to enable vectorization for data grouping.
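
    A sketch, assuming the keyword is written in the same bracketed position as other SQL hints (table and column names are hypothetical):

      select [HINT_VECTORIZED] avg(price) as avgPrice from pt group by sym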

  • Optimized the query performance when the group by column is the VALUE partitioning column.

  • Optimized the performance of left join of an in-memory table and a DFS table.

  • Optimized the performance of SQL clause pivot by.

  • Optimized the computing performance of function rolling.

  • Function getBackupList now returns the column "updateTime" for the last update time and the column "rows" for the number of records in a partition.

  • Added a new key "rows" to the dictionary returned by function getBackupMeta to show the number of rows in a partition.

  • Added optional parameter containHeader to functions loadText, ploadText, loadTextEx, and textChunkDS to indicate whether the file contains a header row.
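
    A sketch, assuming containHeader is passed as a keyword argument (the file path is hypothetical):

      t = loadText("/data/quotes.csv", containHeader=true)    // the first row is treated as column names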

  • Added access control to 31 functions, which can now only be executed by a logged-in user or an administrator.

  • Function updateLicense now throws an exception if the authorization mode has changed.

  • An exception is no longer thrown when slicing a vector with out-of-bounds indices.

  • When accessing a vector by index, NULL values are now returned for out-of-bounds indices (in both the standard and JIT versions of DolphinDB).
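
    For example:

      x = 1 2 3
      x[5]      // returns NULL instead of throwing an out-of-bounds exception
      x[0 5]    // => [1, NULL]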

  • Optimized crc32 algorithm.

  • Optimized function mrank.

  • The data converted by function toJson is no longer limited to a maximum length of 1000.

  • Web-based User Interface:

    • Enhanced code highlighting to keep it consistent with the DolphinDB extension for Visual Studio Code.

    • Numeric values are formatted with comma (,) as the thousands separator, e.g., 1,000,000,000.

    • Updated keywords, code completion, and function documentation.

    • The execution information is displayed in a more compact layout.

    • Enhanced the "status" popover view to display status information in different categories.

    • Enhanced table pagination design and added tooltips for icon buttons.

    • "Job" tab enhancements: Adjusted the field names; Added support for job search by client IP.

    • Fixed an issue where the temporal labels were not correctly formatted in a plot.

Issues Fixed

  • Memory leak occurred when writing STRING type data to a table in a TSDB database.

  • The TSDB engine failed to flush data when the parameter TSDBCacheFlushWorkNum was set to a value smaller than the number of volumes.

  • Serialization on a data node failed if the metadata of a partition exceeded 128 MB.

  • DDL operations on a data node caused partition status errors due to transaction resolution failure.

  • A failure to replay the redo log resulted in partition status errors on the data node.

  • A partition was wrongly deleted due to a migration failure when it was moved to the coldVolume configured for tiered storage.

  • When creating a database with atomic='CHUNK', slow startup occurred due to excessive metadata on the controller.

  • Using the delete statement on a DFS table in a TSDB database caused excessive memory usage.

  • Old data was prematurely reclaimed after an update operation.

  • The original table was not immediately reclaimed after the renameTable operation was performed.

  • Server crashed when a partition path ending with "/" was specified for function dropPartition.

  • Partitions that had been automatically added when creating a DFS table with VALUE-based partitions could not be deleted by specifying conditions for dropPartition.

  • Repeated deletions on an empty table caused cid errors in the metadata stored on the data node.

  • The parameter dfsRecoveryConcurrency did not take effect after configuration.

  • When inserting an array vector into a stream table, the subscription handler failed.

  • Server crashed when passing array vectors to a user-defined function specified for the metrics of createReactiveStateEngine.

  • createReactiveStateEngine failed when specifying factor tablibNull for the metrics.

  • Server crashed when specifying external variables for the metrics of streamEngineParser.

  • Server crashed when the row count of the left table was smaller than the window size in window join.

  • Server crashed when using exec with limit and the number of returned rows was less than limit.

  • The isDuplicated and nunique functions returned incorrect results when working with DOUBLE and FLOAT data types.

  • Calling parseExpr in user-defined functions caused parsing failure.

  • The function getClusterPerf returned an incorrect value for maxRunningQueryTime.

  • Server crashed when using loadNpy to read excessively large npy files.

  • Variables defined within a for-loop could not be accessed outside the loop in the DolphinDB JIT version.