2.00.10
Version: 2.00.10 Compatibility Level 2 with 2.00.9/1.30.21
For details, see Compatibility Changes in Version 2.00.10
Release Date: 2023-07-20
New Features
- Added configuration parameter enableShellFunction to determine whether the shell function can be called by administrators. (2.00.10.13)
- Added new function appendTuple! to append a tuple to another. (2.00.10.4)
- Added new configuration parameter appendTupleAsAWhole to specify whether the tuple should be appended as an embedded tuple element, or if each of its elements should be appended independently to the target tuple. (2.00.10.4)
- Added new configuration parameter parseDecimalAsFloatingNumber, which sets the default behavior for parsing decimals as the DECIMAL type. (2.00.10.4)
- Support for update, insert, and delete operations on partitioned MVCC tables. (2.00.10.4)
- Long-running distributed queries using a select or pivot by clause can now be canceled at any time during execution. (2.00.10.4)
- Added new function cumdenseRank to return the position ranking from the first element to the current element. (2.00.10.4)
- Added login information in logs, including login user, IP, port, status, etc. (2.00.10.4)
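A minimal sketch of how the new appendTuple! function and the appendTupleAsAWhole parameter interact, inferred from the descriptions above (the exact call syntax and results shown are assumptions):

```dolphindb
// hypothetical sketch; behavior as described in the notes above
x = (1, 2)
y = (3, 4)
// with appendTupleAsAWhole = true, y is appended as one embedded element:
//   x becomes (1, 2, (3, 4))
// with appendTupleAsAWhole = false, each element of y is appended independently:
//   x becomes (1, 2, 3, 4)
appendTuple!(x, y)
```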
- Added privilege VIEW_OWNER to allow a user/group to create function views using addFunctionView. (2.00.10.4)
- In SQL queries with the PIVOT BY clause, you can now use the asis function to retain all duplicate records in the result. Previously, PIVOT BY would perform deduplication. (2.00.10.4)
- In SQL queries with the PIVOT BY clause, an array vector can now be specified in a select/exec statement. (2.00.10.4)
- Support for partition pruning when the partitioning column is of the NANOTIMESTAMP type. (2.00.10.4)
- Added new parameter isSequential to plugin.txt to mark a function as order-sensitive or not. (2.00.10.4)
- Added a new "dataInterval" option to the triggeringPattern parameter of the createCrossSectionalEngine function. This option enables calculations to be triggered based on timestamps from the input data. (2.00.10.3)
- Added function parseJsonTable to parse a JSON object into an in-memory table. (2.00.10.2)
- Added function loadModuleFromScript to parse a module dynamically. (2.00.10.2)
- The transaction statement can be used on MVCC tables. (2.00.10.2)
- Added new configuration parameter tcpUserTimeout to set the socket option TCP_USER_TIMEOUT. (2.00.10.2)
- Removed function getClusterReplicationMetrics and added function getSlaveReplicationQueueStatus as its successor. getSlaveReplicationQueueStatus retrieves the status of each execution queue in the slave clusters. (2.00.10.2)
- Added configuration parameter clusterReplicationQueue to set the number of execution queues on each controller of the slave clusters. (2.00.10.2)
- Added configuration parameter clusterReplicationWorkerNum to set the number of workers on each data node of the slave clusters. (2.00.10.2)
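For example, the new parseJsonTable function introduced in 2.00.10.2 might be used as follows (the JSON literal, column names, and result shape are illustrative assumptions):

```dolphindb
// hypothetical sketch of parseJsonTable
js = '[{"sym":"AAPL","price":182.5},{"sym":"MSFT","price":331.2}]'
t = parseJsonTable(js)    // an in-memory table with columns sym and price
select * from t where price > 200
```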
- Added support for RIGHT JOIN on multiple DFS tables. (2.00.10)
- Added configuration parameter memLimitOfTempResult and function setMemLimitOfTempResult to set the upper limit of memory usage for each temporary result generated in a table join operation. (2.00.10)
- Added configuration parameter tempResultsSpillDir to specify the spill directory storing the temporary results generated in table join operations. (2.00.10)
- Added configuration parameter enableCoreDump to enable core dumps. It is only supported on Linux. (2.00.10)
- Added configuration parameter disableCoreDumpOnShutdown to specify whether to generate core dumps on a graceful shutdown. It is only supported on Linux. (2.00.10)
- Added configuration parameter allowMissingPartitions to specify the behavior when incoming data contains new partition values that do not match any existing partitions. (2.00.10)
- Added function listRemotePlugins to obtain a list of available plugins, and function installPlugin to download a plugin. (2.00.10)
- Added configuration parameter volumeUsageThreshold to set the upper limit of the disk usage of a data node. (2.00.10)
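A sketch of the plugin management workflow enabled by the two new functions (the exact parameters, and the plugin name shown, are assumptions):

```dolphindb
// hypothetical usage sketch
listRemotePlugins()       // obtain the list of plugins available for download
installPlugin("mysql")    // download the named plugin ("mysql" is illustrative)
```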
- Added function writeLogLevel to write logs of the specified level to the log file. (2.00.10)
- Added function sessionWindow to group time-series data based on session intervals. (2.00.10)
- Added function summary to generate summary statistics of input data, including min, max, count, avg, std, and percentiles. (2.00.10)
- Added functions encodeShortGenomeSeq and decodeShortGenomeSeq to encode and decode DNA sequences. (2.00.10)
- Added function genShortGenomeSeq to perform DNA sequence encoding within a sliding window. (2.00.10)
- Added function GramSchmidt to implement the Gram–Schmidt orthonormalization. (2.00.10)
- Added function lassoBasic, which is equivalent to lasso but takes vectors as input arguments. (2.00.10)
- Added 26 TopN functions: (2.00.10)
  - m-functions: mskewTopN, mkurtosisTopN
  - cum-functions: cumsumTopN, cumavgTopN, cumstdTopN, cumstdpTopN, cumvarTopN, cumvarpTopN, cumbetaTopN, cumcorrTopN, cumcovarTopN, cumwsumTopN, cumskewTopN, cumkurtosisTopN
  - tm-functions: tmsumTopN, tmavgTopN, tmstdTopN, tmstdpTopN, tmvarTopN, tmvarpTopN, tmbetaTopN, tmcorrTopN, tmcovarTopN, tmwsumTopN, tmskewTopN, tmkurtosisTopN
- Added function initcap to set the first letter of each word in a string to uppercase and the rest to lowercase. (2.00.10)
- Added functions splrep and splev for cubic spline interpolation. (2.00.10)
- Added function scs to compute the optimal solution of linearly constrained linear or quadratic programming problems. (2.00.10)
- Added support for the DECIMAL128 data type. (2.00.10)
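As an illustration of the new initcap function, with the output inferred from the stated semantics (first letter of each word uppercased, the rest lowercased):

```dolphindb
initcap("hello WORLD")    // => "Hello World"
```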
- Added functions rowPrev, rowNext, rowMove, rowCumsum, rowCumprod, rowCummax, rowCummin, and rowCumwsum for row-based calculations. (2.00.10)
- Added function temporalSeq to generate time series at specified intervals. (2.00.10)
- Added function ungroup to flatten columns containing fast array vectors or columnar tuples. (2.00.10)
- Added function decimalMultiply to multiply data of DECIMAL types. (2.00.10)
- Added functions base64Encode and base64Decode to encode and decode Base64 data. (2.00.10)
- Added function addFunctionTypeInferenceRule to specify the inference rule of user-defined functions in the DolphinDB JIT version. (2.00.10)
- Added support for the COMPLEX data type in the DolphinDB JIT version. (2.00.10)
- Added configuration parameter localSubscriberNum to set the number of threads distributing messages from the publish queue in local subscriptions. (2.00.10)
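A quick sketch of the new Base64 functions; the round trip shown follows standard Base64 encoding, though the exact signatures are assumptions:

```dolphindb
base64Encode("DolphinDB")       // => "RG9scGhpbkRC"
base64Decode("RG9scGhpbkRC")    // => "DolphinDB"
```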
- Added function createStreamDispatchEngine to create a streaming data dispatch engine. (2.00.10)
- DECIMAL data is supported in the time series engine and reactive state engine when the following functions are used: (2.00.10)
  - Time Series Engine (created with createTimeSeriesEngine): corr, covar, first, last, max, med, min, percentile, quantile, std, var, sum, sum2, sum3, sum4, wavg, wsum, count, firstNot, ifirstNot, lastNot, ilastNot, imax, imin, nunique, prod, sem, mode, searchK
  - Reactive State Engine (created with createReactiveStateEngine): cumsum, cumavg, cumstd, cumstdp, cumvar, cumvarp, cumcorr, cumbeta, cumcovar, cumwsum, cumwavg, msum, mavg, mstd, mstdp, mvar, mvarp, mcorr, mbeta, mcovar, mwsum, mwavg, tmsum, tmavg, tmstd, tmstdp, tmvar, tmvarp, tmcorr, tmbeta, tmwsum, tmwavg
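As a sketch, a DECIMAL column can now flow through a reactive state engine metric such as cumsum. The table schemas and engine name below are illustrative assumptions:

```dolphindb
// hypothetical sketch: cumsum over a DECIMAL column in a reactive state engine
dummy = table(1:0, `sym`price, [SYMBOL, DECIMAL64(2)])
out = table(10000:0, `sym`cumPrice, [SYMBOL, DECIMAL64(2)])
engine = createReactiveStateEngine(name="decimalDemo", metrics=<cumsum(price)>,
    dummyTable=dummy, outputTable=out, keyColumn="sym")
```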
Improvements
- Added configuration parameter strictSecurityVerification to enable a password strength checker and limit the number of failed login attempts. (2.00.10.10)
- The permission object (parameter objs) can be specified as '*' when the access is applied at the global level. (2.00.10.8)
- When asynchronous cluster replication is enabled, operations on empty tables in the slave cluster will throw an exception. (2.00.10.8)
- Optimized the write performance of the TSDB engine. (2.00.10.4)
- Optimized the performance of function dropTable when deleting a partitioned table with over 100,000 partitions. (2.00.10.4)
- The divisor of div/mod can now be a negative number. (2.00.10.4)
- A new directory will be created automatically if the configured persistenceOffsetDir cannot be found. (2.00.10.4)
- Long-running replay tasks can now be canceled more promptly. (2.00.10.4)
- Optimized transactions on compute nodes. (2.00.10.2)
- Added parameter keepRootDir to function rmdir to specify whether to keep the root directory when deleting files. (2.00.10.2)
- The license function obtains license information from memory by default. (2.00.10.2)
- The getClusterDFSTables function returns all tables created by the user regardless of the table permissions. (2.00.10.2)
- An empty table can be backed up by copying files. (2.00.10.2)
- Optimized asynchronous replication (2.00.10.2):
  - After asynchronous replication is enabled globally, the system now allows operations on slave cluster databases that are not included in the replication scope.
  - The mechanism for pulling replication tasks from the master to the slave clusters has been improved.
- The <DataNodeNotAvail> error message now provides more details. (2.00.10.2)
- Optimized the output log of subscribeTable. (2.00.10.2)
- Optimized the performance of concurrent read and write operations for the TSDB engine. (2.00.10.2)
- A user-defined function allows the default value of a parameter to be an empty tuple (represented as []). (2.00.10.1)
- Added user access control to the loadText function. (2.00.10.1)
- Modifications made to user access privileges are logged. (2.00.10.1)
- The resample function can take a matrix with non-strictly increasing row labels as an input argument. (2.00.10.1)
- Optimized the join behavior for tuples. (2.00.10.1)
- A ternary function can be passed as an input argument to the template accumulate in a reactive state engine. (2.00.10.1)
- Added parameter validation to streamEngineParser: if triggeringPattern='keyCount', then keepOrder must be true. (2.00.10.1)
- Configuration parameters localExecutors and maxDynamicLocalExecutor were removed. (2.00.10)
- Functions window and percentChange can be used as state functions in the reactive state engine. (2.00.10)
- Support JOIN on multiple partitioned tables. (2.00.10)
- Optimized the performance of the dropTable function when deleting a table with a large number of partitions. (2.00.10)
- Optimized the performance of filtering data with a WHERE clause in a TSDB database. (2.00.10)
- Optimized the performance of joining tables in a TSDB database. (2.00.10)
- Enhanced support for ANSI SQL joins. The join column can be any column from the tables, including a column that has functions applied or is filtered by conditional expressions. (2.00.10)
- Support LEFT JOIN, FULL JOIN, and INNER JOIN on two tables where one table's join column is of STRING type and the other's is of an integral type. (2.00.10)
- Support SELECT NOT on DFS tables. (2.00.10)
- Support SQL keywords in all uppercase or all lowercase. (2.00.10)
- Support the comma (,) for CROSS JOIN of tables. (2.00.10)
- Support line breaks within SQL statements; however, multi-word keywords such as ORDER BY, GROUP BY, UNION ALL, and INNER JOIN cannot be split across two lines. (2.00.10)
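Putting a few of these SQL changes together (the table is illustrative):

```dolphindb
t = table(`A`B`A`B as sym, 1.0 2.0 3.0 4.0 as price)
// keywords may be all uppercase or all lowercase, and statements may span
// multiple lines, as long as multi-word keywords like GROUP BY stay on one line
select avg(price)
from t
group by sym
```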
- The implementation of select * from a join b has changed from select * from join(a, b) to select * from cj(a, b). (2.00.10)
- Support operator <> in SQL statements, which is equivalent to !=. (2.00.10)
- Support the keyword NOT LIKE in SQL statements. (2.00.10)
- Changed the NULL-matching behavior of LEFT JOIN, LEFT SEMI JOIN, RIGHT JOIN, FULL JOIN, and EQUI JOIN on columns containing NULL values: (2.00.10)
  - In previous versions, a NULL value was matched to another NULL.
  - Since the current version, a NULL value cannot be matched to another NULL.
- For function sqlDS, a DFS table partitioned by DATEHOUR selected in sqlObj will now be correctly filtered by date. (2.00.10)
- Optimized file merging for the TSDB engine to reduce memory consumption. (2.00.10)
- Optimized the storage architecture of the TSDB engine with fewer blocks to reduce memory usage. (2.00.10)
- Added new parameters defaultValues and allowNull for function mvccTable to set the default values for columns and determine whether its columns can contain NULL values, respectively. It is now supported to modify column names and types, and to delete columns of MVCC tables. (2.00.10)
- For the "Status" column returned by function getRecoveryTaskStatus, the previous status "Finish" has been changed to "Finished", and "Abort" to "Aborted". (2.00.10)
- Optimized graceful shutdown, before which all symbol bases are flushed to disk. (2.00.10)
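The NULL-matching change above can be illustrated with a left join (lj) on two small tables; this is an assumption-level sketch of the behavior described in the notes:

```dolphindb
t1 = table([1, NULL] as id, ["a", "b"] as v1)
t2 = table([1, NULL] as id, ["x", "y"] as v2)
// previously the NULL keys matched each other; since 2.00.10 they do not,
// so the NULL row of t1 gets no match from t2
select * from lj(t1, t2, `id)
```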
- Added in-place optimization fields, i.e., inplaceOptimization and optimizedColumns, when using HINT_EXPLAIN to check the execution plan of a GROUP BY clause when algo is "sort". (2.00.10)
- Function addColumn can now add a column of DECIMAL type. (2.00.10)
- Optimized the performance of point queries on a table containing array vectors. (2.00.10)
- Optimized the execution logic of the TSDB engine when compaction and partition drop are executed at the same time. (2.00.10)
- Added a check for duplicated column names when updating column names with function rename!. (2.00.10)
- The column names specified with the rename!, replaceColumn!, and dropColumns! functions are no longer case-sensitive. (2.00.10)
- Added new parameters swColName and checkInput for the lasso and elasticNet functions to specify the sample weight and validation check, respectively. Added new parameter swColName for the ridge function. (2.00.10)
Added parameters x0, c, eps, and alpha for function
qclp
to specify absolute value constraints, solving accuracy, and relaxation parameters. (2.00.10) -
Functions
loadText
,pLoadText
, andextractTextSchema
now can load a data file that contains a record with multiple newlines. (2.00.10) -
The delimiter parameter of the
loadText
,pLoadText
,loadTextEx
,textChunkDS
,extractTextSchema
functions can be specified as one or more characters. (2.00.10) -
When importing a table using function
loadTextEx
, an error will be reported if the table schema does not match the schema of the target database. (2.00.10) -
Added check for the schema parameter of function
loadTextEx
. Since this version, the table specified by schema MUST NOT be empty, and the "name" and "type" columns must be of STRING type. (2.00.10) -
An error will be reported when importing a table via function
loadTextEx
to an OLAP database with tables containing array vectors or BLOB columns. (2.00.10) -
- Added new parameter tiesMethod, which is used to process groups of records with the same value, for the following moving TopN functions: mstdTopN, mstdpTopN, mvarTopN, mvarpTopN, msumTopN, mavgTopN, mwsumTopN, mbetaTopN, mcorrTopN, mcovarTopN. (2.00.10)
- The following functions support columnar tuples: rowWavg, rowCorr, rowCovar, rowBeta, and rowWsum. (2.00.10)
- Optimized the prediction performance of function knn. (2.00.10)
- The time series engine and daily time series engine can now output columns holding array vectors. (2.00.10)
- Optimized the performance of the moving function used in the reactive state engine. (2.00.10)
- The anomaly detection engine can now specify multiple grouping columns for parameter keyColumn. (2.00.10)
- The window parameter of function genericStateIterate can now be specified as 1. Performance is optimized when window is specified as 0 or 1. (2.00.10)
- Added new parameter sortByTime for the createWindowJoinEngine and createAsOfJoinEngine functions to determine whether the result is returned in the order of timestamps globally. (2.00.10)
Added check for the T parameter of function
genericTStateIterate
, which must be strictly increasing. (2.00.10) -
The streaming engine can now be shared with the
share
function/statement for concurrent writes. (2.00.10) -
An error will be reported when using the left semi join engine to subscribe to a table containing array vectors. (2.00.10)
-
An error will be reported when using the
share
function/statement or theenableTableShareAndPersistence
function to share the same table multiple times. (2.00.10) -
An error will be reported if the data of INT type is appended to a SYMBOL column of the left table of a window join engine. (2.00.10)
-
Support pickle serialization of array vectors of UUID, INT128, and IP types. (2.00.10)
-
DolphinDB JIT version supports the
join
operator (<-). (2.00.10) -
The
isort
function in JIT version can take a tuple with vectors of equal length as input. (2.00.10) -
The
if
expression in JIT version supports thein
operator. (2.00.10) -
Vectors can be accessed with Boolean index in JIT version. (2.00.10)
-
Support comments with multiple /**/ sections in one line. (2.00.10)
-
The function
stringFormat
now supports: data type matching, format alignment, decimal digits, and base conversion. (2.00.10) -
The second parameter of function
concat
can be NULL. (2.00.10) -
Function
take
can take a tuple or table as input. (2.00.10) -
Function
stretch
can take a matrix or table as input. (2.00.10) -
Functions
in
andfind
support table with one column. (2.00.10) -
When the parameter moduleDir is configured as a relative path, the system searches the modules under the homeDir/modules directory. (2.00.10)
-
The result of function
in
,binsrch
,find
, orasof
takes the same format as the input argument Y. (2.00.10) -
An error is raised when passing a tuple to function
rank
. (2.00.10)
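For instance, take recycles its input to the requested length, and per the note above it now also accepts a tuple or a table:

```dolphindb
take(1 2 3, 5)    // => [1, 2, 3, 1, 2]
// take(t, n) on a table t similarly yields n rows (sketch based on the note above)
```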
Issues Fixed
- [D20-18827] Fixed an error of parseJsonTable when parsing JSON objects containing \\\". (2.00.10.15)
- [D20-18935] Fixed incorrect results returned by aggregate functions such as wsum and wavg with a scalar and an empty array as inputs. (2.00.10.15)
- Fixed potential security bugs. (2.00.10.13)
- The controller could crash during startup if it continued to receive login requests from other nodes. (2.00.10.9)
- For local multi-threaded subscription, publishing data at excessively high rates could overwhelm local subscription queues, preventing reception of new messages and leading to data loss. (2.00.10.9)
- Executing the login and getDynamicPublicKey functions with high concurrency could cause the server to crash. (2.00.10.8)
- The bar function incorrectly grouped data spanning multiple days from a DFS table when the parameter closed was set to 'right'. (2.00.10.8)
- The parseJsonTable function converted JSON null values of string type into the literal "NULL" rather than empty values. (2.00.10.8)
- Overly large BLOB fields from persisted stream tables could lead to substantial data loads into memory, even when a small preCache value was configured. (2.00.10.8)
- An error occurred if a nested aggregate function was used with a group by clause when querying data from an in-memory table. (2.00.10.8)
- For concurrent asynchronous replication, the controller of the slave cluster failed to assign tasks in rare cases. (2.00.10.8)
- On rare occasions, queries submitted through the web-based cluster manager failed, displaying the error "connection closed, code: 1006". (2.00.10.7)
- When parsing a JSON string, if the first 10 rows of a field were all NULLs, the parseJsonTable function returned an incorrect parsing result. (2.00.10.7)
- Using function pack led to memory leaks. (2.00.10.6)
- Executing cross(func, a, b) could cause the server to crash if the size of a or b was too large. (2.00.10.6)
- Using function unpack led to memory leaks. (2.00.10.5)
- If the func parameter of function withNullFill was the or operator, incorrect results were returned when its operands were Boolean values. (2.00.10.5)
- The limit clause did not take effect when the grouping column was sortColumns. (2.00.10.4)
- Data contention when updating a table schema led to an OOM problem and server crash. (2.00.10.4)
- A backup could get stuck when the backup directory (backupDir) was on NFS. (2.00.10.4)
- A memory access out-of-bounds error occurred when attempting to close a connection that was created after setting the maximum number of connections using setMaxConnections. (2.00.10.4)
- When joining partitioned tables using a statement that did not conform to SQL standards, referencing a column from the left table in the where clause caused the server to crash. (2.00.10.4)
- If creating an IPC in-memory table failed, creating another one with the same name caused the server to crash. (2.00.10.4)
- An error was reported when the filtering condition in a distributed query contained a comparison between operands of SECOND and INT types. (2.00.10.4)
- The SYMBOL type in an IPC in-memory table was not compatible with the STRING type. (2.00.10.4)
- An "unrecognized column" error was raised when the system was executing a distributed query that (1) involved a reduce phase and (2) queried data on remote nodes. This issue was introduced in version 2.00.10. (2.00.10.3)
- Setting user access in a high-availability cluster led to memory leaks on the controller. (2.00.10.2)
- The parseExpr function failed to parse the empty value "{}" in a JSON object. (2.00.10.2)
- When passing a stream table to the parameter dummyTable of function createReactiveStateEngine, accessing the engine handle caused a disconnection. (2.00.10.2)
- An OOM error occurred when writing to TSDB databases in a single-machine cluster, causing inconsistent transaction states. (2.00.10.2)
- An error message "getSubChunks failed, path '/xx' does not exist" was reported when restoring data to a newly-created database. (2.00.10.2)
- The elements accessed based on labels by the loc function were incorrect. This issue was introduced in version 2.00.10. (2.00.10.2)
- Scale loss occurred when restoring DECIMAL data. (2.00.10.2)
- If the parameter atomic of function database was set to 'CHUNK', the versions of metadata on the controller and data nodes could be inconsistent if a transaction involved multiple chunks. (2.00.10.2)
- Passing a non-string variable to the parameter label of function interval crashed the server. (2.00.10.2)
- For a table partitioned by temporal values, queries with where conditions on the partitioning column were slow. This issue was introduced in version 2.00.10. (2.00.10.2)
- An overflowed intermediate result of function mprod caused a server crash. (2.00.10.2)
- The result of in(X,Y) was incorrect when Y was a set containing a LONG value with more than 11 digits. (2.00.10.2)
- Concurrent execution of restore (and other) transactions could result in inconsistent metadata after a server restart. (2.00.10.2)
- The reactive state engine returned incorrect results when calculating genericStateIterate on input data with over 1024 groups. (2.00.10.2)
- Using a user-defined function with the "@JIT" identifier to query a DFS table caused a server crash. (2.00.10.2)
- On Windows, the files function returned inaccurate fileSize values for files exceeding 2 GB. (2.00.10.1)
- In a high-availability cluster, if an error occurred during serialization when using addFunctionView, the function was not cleared from memory. (2.00.10.1)
- In a high-availability cluster, adding a function view containing plugin methods to a controller caused failures in other controllers. (2.00.10.1)
- Users with the DB_MANAGE privilege failed to grant permissions to other users. (2.00.10.1)
- Adding a node could cause backup errors. (2.00.10.1)
- Queries on DFS tables using COMPO partitioning could cause data loss if the query: (2.00.10.1)
  - Did not use aggregate functions, order-sensitive functions, row reduce functions (such as rowSum), or fill functions (such as ffill) in the select statement.
  - Used one of the partitioning columns (except the last one for COMPO partitioning) as a pivot-by column.
- Parsing errors occurred in certain cases using and not like(id, '%a'), not like, not in, or not between. This bug was introduced in version 2.00.10. (2.00.10.1)
- If an error occurred in a symbol base file, reloading the file caused a server crash. (2.00.10.1)
- Specifying a tuple containing functions or expressions with multiple returns for the metrics parameter of createReactiveStateEngine caused the server to crash. (2.00.10.1)
- [D20-11604] Fixed unexpected results returned by functions mstd, mstdp, mvar, and mvarp when processing consecutive identical numbers (non-DECIMAL) within a window due to floating-point precision issues. A precision check has now been added, and the return value is 0 in such cases. (2.00.10)
- When querying a large DFS table using the SQL keyword TOP or GROUP BY, an error was potentially raised. (2.00.10)
- When a SQL query specified a column name that could not be recognized, the error message returned contained an incorrect column name instead of the actual unrecognized column name from the query. (2.00.10)
- Failures to write to a partition of a DFS table with many columns could cause the server to crash. (2.00.10)
- Concurrently loading and deleting multiple tables in a database could cause subsequent loadTable operations to fail with an error reporting that the .tbl file cannot be found. (2.00.10)
- The head and tail functions could not be used in aggregate functions. This bug was introduced in DolphinDB 2.00.6. (2.00.10)
- A deadlock could occur when concurrently renaming a dimension table via renameTable and querying the same table. (2.00.10)
- When querying a table with a large number of partitions using a SQL query with BETWEEN...AND... for partition pruning, the error "The number of partitions [xxxxx] relevant to the query is too large" could be raised. (2.00.10)
- When using the TSDB storage engine and setting keepDuplicates=LAST on a table, the UPDATE statement behaved in a case-sensitive manner for column names. Starting in this release, column names are handled in a case-insensitive manner. (2.00.10)
- Using calculations or functions in a CASE WHEN condition could crash the server. (2.00.10)
- Using the DISTINCT keyword in SQL queries could return incorrect results. (2.00.10)
- The server could crash when the TSDB storage engine encountered an OOM error while writing data from memory to disk. (2.00.10)
- Attempting to write STRING data exceeding 256 KB in length to a table using the TSDB storage engine failed with the error "TSDBEngine failed to deserialize level file zonemap". (2.00.10)
- When querying a VALUE or RANGE partitioned DFS table, if the SELECT clause and GROUP BY clause both applied the same time conversion function (e.g., date()) to the partitioning column but used different aliases for that column, incorrect results could be returned. (2.00.10)
- When deleting data from a partitioned table using a SQL DELETE statement, if all nodes storing the replicas for the relevant partition were offline, the error "chunktype mismatched for path" could be raised. (2.00.10)
- The use of local executors could lead to deadlock situations during task scheduling. (2.00.10)
- In the DolphinDB JIT version, when appending large amounts of data to a reactive state engine (createReactiveStateEngine) that used user-defined functions, incorrect results could be returned. (2.00.10)
- A deadlock could occur when unsubscribeTable was called from multiple nodes simultaneously. (2.00.10)
- The server crashed when the capitalization of the column names specified in the metrics and input tables of a left semi join engine (createLeftSemiJoinEngine) was inconsistent. (2.00.10)
- The server crashed when appending data to a stream table and persisting the table at the same time. (2.00.10)
- If the metrics of createWindowJoinEngine specified a column name alias, incorrect aggregate results were returned. (2.00.10)
- After DROP TABLE was called to delete a stream table, the table could not be deleted or unsubscribed from. (2.00.10)
- Syntax parsing issues: statements such as "/" == "a" could not be parsed correctly. (2.00.10)
- An additional column was output when the second parameter of function ols consisted solely of 0. (2.00.10)
- The join results of DECIMAL data were incorrect. (2.00.10)
- The server crashed due to a parsing failure when the parameter aggs of function wj was not compliant. (2.00.10)
- The result of function expr was incorrect if a DATEHOUR argument was passed. (2.00.10)
- The web interface could not be accessed properly if the parameter webLoginRequired was configured to true. (2.00.10)
- Incorrect results were returned when using cast to convert SYMBOL data. (2.00.10)
- Function nullFill failed to fill the NULL values returned by function bucket. (2.00.10)
- Precision loss occurred after applying unpivot to a column of DECIMAL type. (2.00.10)
- When a user-defined anonymous aggregate function was called with twindow in another user-defined function, the error "func must be an aggregate function." was raised. (2.00.10)
- When a DolphinDB process was started, the server crashed if a script (as configured with parameter run) containing function submitJob was executed. (2.00.10)