SQL Anywhere Bug Fix Readme for Version 8.0.2, build 4308
Contents
Express Bug Fix:
A subset of the software with one or more bug fixes. The bug fixes are
listed in the "readme" file for the update. A Bug Fix update may only be
applied to installed software with the same version number.
Some testing has been performed on the software, but full testing has not
been performed. Customers are discouraged from distributing these files
with their application unless they have verified the suitability of the
software themselves.

Bug Fix:
A subset of the software with one or more bug fixes. The bug fixes are
listed in the "readme" file for the update. A Bug Fix update may only be
applied to installed software with the same version number.
Full testing has been performed on the software.

Maintenance Release:
A complete set of software that upgrades installed software from an older
version with the same major version number (version number format is
major.minor.patch). Bug fixes and other changes are listed in the "readme"
file for the upgrade.
For answers to commonly asked questions, please see the Frequently Asked
Questions page.
The following is a list of bugs fixed in this collection of software,
as compared to the 8.0.2 Maintenance Release.
================(Build #4233 - Engineering Case #313686)================
If a stored procedure executed a RAISERROR statement to signal a user-defined
error, the provider would not have returned the exception to the application.
Instead, the application would have received the error "object reference
not set to an instance of an object". The correct RAISERROR message will now
be returned.
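For reference, the affected pattern looks like the following sketch (the
procedure name and error number are hypothetical; ASA user-defined error
numbers are greater than 17000):
create procedure check_balance( in amount numeric(10,2) )
begin
    if amount < 0 then
        -- signals a user-defined error back to the client application
        raiserror 99001 'Balance may not be negative';
    end if;
end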
================(Build #4233 - Engineering Case #313788)================
When using the Managed provider, columns with a TIME datatype are mapped
to the .NET TimeSpan type when VS.NET generates strongly-typed DataSet classes
for XSD files. The provider was failing to fill the DataSet because it
was returning DateTime values for the TIME columns, and DateTime values cannot
be cast to TimeSpan values. This was fixed by converting TIME columns to
TimeSpan values and adding a new method, GetTimeSpan, to the AsaDataReader
class.
================(Build #4233 - Engineering Case #313797)================
An InvalidCastException would have been thrown by the .NET Common Language
Runtime when generating an UPDATE command that had a SELECT sub-query which
returned decimal columns. This has been fixed.
================(Build #4233 - Engineering Case #313803)================
When executing a parameterized query to insert a row into a table, if a parameter
value was of type DECIMAL and the precision was not specified, an incorrect
value was inserted into the table. Fixed by setting the precision to 30 for
decimal values with an unspecified precision.
================(Build #4240 - Engineering Case #315667)================
When a Smart Device application that had opened a Data Reader terminated
without closing the connection or the Data Reader, a managed AsaException
and a native exception would have occurred. This is now fixed by closing
the Data Reader when the connection is closed.
================(Build #4240 - Engineering Case #316908)================
The version number reported by the Managed Provider dll did not contain the
build number. The number displayed was always 8.0.2.0. This has now been
fixed to also display the build number.
================(Build #4244 - Engineering Case #316467)================
If ExecuteNonQuery was called on an AsaCommand object to drop a table that
didn't exist, an error would have occurred, and after that the AsaCommand
object could no longer be used. This has now been fixed.
================(Build #4250 - Engineering Case #317705)================
A System.InvalidCastException was thrown when calling AsaDataAdapter.Fill,
if a column value was null and the type of the corresponding DataColumn was
different from the type of the database column. Fixed by returning a DBNull.Value
if the column is null.
================(Build #4253 - Engineering Case #318823)================
A NullReferenceException would have been thrown when creating a new connection
if dbdata8.dll was not found in the iAnywhere.Data.AsaClient.dll's directory
or the application's working directory, and the registry key (HKEY_LOCAL_MACHINE\Software\Sybase\Adaptive
Server Anywhere\8(9).0\Location) was not found. This is now fixed.
================(Build #4255 - Engineering Case #319390)================
A .NET application may not have worked properly if the version of the native
dll (dbdata8.dll) did not match the version of the managed dll (iAnywhere.Data.AsaClient.dll).
Now, when the managed dll loads the native dll, it checks the version. If
the native dll's version is not the same as the managed dll's version, an
error message is displayed.
================(Build #4260 - Engineering Case #317916)================
When a program using ADO.NET is built, the version number, including the build
number, of the iAnywhere.Data.AsaClient.dll is stored in the program. When
an EBF was installed, the old version of the data provider dll was replaced
with a newer one, and the program would then not have loaded correctly. This
has been fixed by installing a 'publisher policy' file that tells the .NET
Framework to map older versions of the data provider dll to the newly installed
version.
================(Build #4265 - Engineering Case #321355)================
When executing a command to update numeric fields with parameters, the AsaClient
needs to convert .NET numeric values to ASA numeric values. If the scale
of a numeric value was incorrect, the command would have failed and an exception
would have been thrown. This has been fixed.
================(Build #4273 - Engineering Case #322580)================
With this change, the AsaCommand.Cancel method is now supported. An application
can now execute some command in one thread and cancel the command in another
thread.
================(Build #4278 - Engineering Case #323194)================
Normally an application using the ADO.NET provider will create an AsaCommand
object (representing the query to be executed), obtain an AsaReader (representing
the cursor for that statement), fetch the rows, call AsaReader.Close() and
then delete the AsaCommand object. If the AsaCommand object was deleted before
calling AsaReader.Close(), it was possible that the provider would have attempted
to send an invalid statement handle to the server when trying to drop the
statement. If done repeatedly, this could have resulted in the resource governor
for prepared statements being exceeded, as well as wasted resources on the
server. The statement is now dropped correctly.
================(Build #4278 - Engineering Case #323683)================
On Windows CE devices, a TypeLoadException would have been thrown if
dbdata8/9.dll was not found in the directory containing iAnywhere.Data.AsaClient.dll.
This is now fixed.
================(Build #4285 - Engineering Case #322776)================
The error "System.ArgumentException: Cannot change DataType of a column
once it has data" would have been thrown if a DataSet was reused by resetting
the DataTables of the DataSet and refilling. This problem has been fixed.
================(Build #4294 - Engineering Case #325292)================
The sample program simplece may have crashed with a Fatal Application Error
when run on a Windows CE .NET 4.1 device. This has been fixed.
================(Build #4294 - Engineering Case #325713)================
When running on Windows CE, the AsaClient would have failed with an unhandled
exception on opening a database connection, if the code page of the database
was not supported by the device. Now the AsaClient will check the code page
of the database and, if it is not installed on the device, the AsaClient will
throw an exception to inform the application.
================(Build #4297 - Engineering Case #327298)================
A VersionNotFoundException would have been thrown when updating a deleted
row with the DataAdapter without first setting the DataRowVersion property
of DeleteCommand's parameters to 'Original'. This problem is now fixed.
================(Build #4297 - Engineering Case #327306)================
A NullReferenceException would have occurred when calling ExecuteScalar if
the command did not return a resultset. This problem has been fixed.
================(Build #4297 - Engineering Case #327312)================
When using the AsaDataAdapter to fill a DataTable that included a varbinary
column with a value of empty string, an ArgumentNullException was thrown.
The AsaClient was failing to convert the binary column to a byte array for
empty strings. This has been fixed by returning a zero-length byte array
for empty binary values.
================(Build #4302 - Engineering Case #327840)================
When updating data with a data adapter, a DBConcurrencyException was not thrown
if an attempt to execute a DELETE, INSERT or UPDATE statement resulted in no
rows being affected. This problem has now been fixed.
================(Build #4303 - Engineering Case #328669)================
The Serializable attribute was not supported for AsaException, AsaError and
AsaErrorCollection. The Serializable attribute has now been added.
================(Build #3607 - Engineering Case #303124)================
This change resolves a number of sporadic problems which could have been
seen:
- failing to connect to the utility_db running on a network server
- failing to connect to a server while using a FILEDSN (or any ODBC DSN
on UNIX)
For either of the above to have occurred, the file containing the utility_db's
dba password (util_db.ini), the FILEDSN, or the UNIX ODBC DSN must have
had the following properties:
- the DSN being used was the last one in the file
- the last line of the DSN was the last line in the file
- the last line of the file was either not terminated, or terminated with
a single line-feed character (i.e. not carriage-return/line-feed as on Windows
platforms)
For example, the client could have failed to connect to the utility_db on
the first attempt, but succeeded when the exact same client connect string
was specified a second (or third) time, then failed again on the next attempt,
and so on.
This has been fixed, but a workaround is to add a blank line to the end
of the file.
================(Build #4076 - Engineering Case #298957)================
When a procedure which did not return a result set was opened in Embedded
SQL using EXEC SQL OPEN, the procedure was executed and the SQLE_PROCEDURE_COMPLETE
warning was returned. After receiving this warning, attempting to close
the cursor gave a SQLE_CURSOR_NOT_OPEN error, and attempting to open a cursor
with the same name gave a SQLE_CURSOR_ALREADY_OPEN error. Now the cursor
is in the closed state if the open returns SQLE_PROCEDURE_COMPLETE, and opening
a cursor with the same name will succeed.
================(Build #4215 - Engineering Case #309294)================
When specifying connection parameters, if one of them was the "Encryption"
parameter and it had an incorrect value, an unhelpful error message
was reported.
Example:
LINKS=TCPIP;ENC=ECC-TLS(trusted_certificates=sample.crt)
would have displayed the following message:
Parse error: Bad value near ''
This has been fixed. The following message is now displayed:
Parse error: Bad value near 'ECC-TLS(trusted_certificates=sample.crt)'
================(Build #4265 - Engineering Case #321289)================
If an integrated login attempt failed because of a communications error,
the client application could have crashed. This would only have happened
if the connection string (or data source, if the DSN parameter was used)
contained all of Integrated=YES, Userid and Password. This has been fixed.
================(Build #4289 - Engineering Case #326018)================
Fetching an ESQL DT_STRING, DT_DATE, DT_TIME, or DT_TIMESTAMP host variable
(or an array-of-char type) where the length of the char array was one did
not add the null character. This has been fixed so that the null character is
set (note no data is actually copied into the host variable's character array
other than the null character, since there is only space for the null character).
Fetching an ESQL DT_STRING with length zero on a blank padded database could
have caused dblib to crash. This case was only possible when using an SQLDA,
and has been fixed so that no data is copied (not even the null character).
================(Build #4292 - Engineering Case #326342)================
If a batch or procedure returned multiple result sets and one of the result
sets generated a warning, then the JDBC driver would have failed to return
the remainder of the result sets. This problem has now been fixed.
================(Build #4303 - Engineering Case #322867)================
When executing a batch that returned multiple result sets with differing
schema and connected via the iAnywhere JDBC Driver, fetching the second or
subsequent result set could have resulted in conversion or truncation errors.
For example, the following batch would have failed to return the second result
set correctly.
drop table test;
create table test( c long varchar );
insert into test values ('test');
BEGIN
select 123;
SELECT c from test;
END
This problem has now been fixed.
A second problem with multiple result sets was also fixed. If the previous
result set was closed, then the call to determine if there were additional
result sets would have failed, making it seem like there were no more result
sets for the statement. This problem has also been fixed.
================(Build #4081 - Engineering Case #296731)================
When fetching data as Unicode strings, SQLGetData could have returned the
wrong value for the string length, when the database character set was UTF8.
This would only have occurred when the buffer being fetched into was larger
than the UTF8 string length, but smaller than the string length once it was
converted to Unicode. This has been corrected.
================(Build #4090 - Engineering Case #302303)================
An ODBC application could have crashed, if it called SQLGetDiagRec or SQLGetDiagRecW
with the record number parameter greater than 2 and there were three or more
diagnostic records. This has been fixed.
================(Build #4090 - Engineering Case #302554)================
SQLColAttributes and SQLColAttribute would have returned the type as "char"
for the following column types:
LONG VARCHAR
BINARY
VARBINARY
LONG VARBINARY
BIGINT
TINYINT
BIT
GUID
The correct type name is now returned.
================(Build #4095 - Engineering Case #302555)================
Calling the ODBC function SQLGetInfo() for SQL_OJ_CAPABILITIES would not
have reported that "Full outer joins are supported" (SQL_OJ_FULL). SQL_OJ_FULL
has now been added to the list of outer join capabilities.
================(Build #4095 - Engineering Case #303304)================
Calling the ODBC function SQLGetInfo() for SQL_GROUP_BY, would have incorrectly
returned the capability SQL_GB_NO_RELATION, (The columns in the GROUP BY
clause and the SELECT list are not related). Now, the correct capability
SQL_GB_GROUP_BY_CONTAINS_SELECT is returned, (The GROUP BY clause must contain
all non-aggregated columns in the select list).
================(Build #4208 - Engineering Case #305454)================
When positioning to the last row in a result set containing only 1 row, the
RowCount value reported would have been 0, when it should have been 1. This
could have occurred using SQLExtendedFetch() or SQLFetchScroll() with a FetchOrientation
of SQL_FETCH_LAST, or with the RDO MoveLast, and a result set with exactly
one row.
The problem has been fixed.
================(Build #4211 - Engineering Case #306170)================
In a Visual Basic RDO application, when updating columns in a result set,
the updates would fail after the first rowset had been processed. A MoveNext
would fail with the error "Not Enough fields allocated in SQLDA". The problem
has been fixed.
================(Build #4212 - Engineering Case #308449)================
The ODBC driver may have occasionally reported the lengths of strings incorrectly,
when dealing with multibyte character sets. The data was returned correctly,
but the length would sometimes be too long. The length reported should now
be correct.
================(Build #4215 - Engineering Case #309298)================
When using SQLGetData to fetch Unicode strings, the indicator could occasionally
have been incorrect if multiple columns were being fetched from the same row
and the strings were being fetched in pieces. This has been fixed.
================(Build #4216 - Engineering Case #302959)================
A query with a proxy table and an alias in the select list may have caused
the server to crash or hang with 100% CPU utilization. This is now fixed.
================(Build #4224 - Engineering Case #310019)================
The returned value for SQL_DBMS_VER was only displaying the major version
of the server that the driver was connected to. For example, for ASA 7.0.4,
the string returned was "07.00.0000" and, for Adaptive Server IQ 12.4.3,
the string "12.00.0000" was returned.
This has been corrected. For Adaptive Server Anywhere version 7.0.4 the
string returned for SQL_DBMS_VER is now "07.00.0004". For Adaptive Server
IQ version 12.4.3, the string "12.04.0003" will be returned.
================(Build #4230 - Engineering Case #312857)================
A call to SQLMoreResults was returning SQL_NO_DATA_FOUND, instead of SQL_ERROR,
when a batch contained a statement with an error. According to the ODBC specification,
if one of the statements in a batch fails, and the failed statement was the
last statement in the batch, SQLMoreResults should return SQL_ERROR. This
has been corrected.
================(Build #4246 - Engineering Case #317279)================
The Windows CE ODBC driver was using the connection parameters from the first
connection's FILEDSN for future connections, even if a different FILEDSN
was subsequently used. This has been fixed.
================(Build #4250 - Engineering Case #318137)================
A shared memory connection could have failed on Windows CE platforms after
unloading and reloading dbodbc8.dll.
For example, if a client application:
1) made a shared memory connection
2) disconnected
3) unloaded dbodbc8.dll
4) reloaded dbodbc8.dll again
5) attempted another shared memory connection within a few seconds of the
previous disconnect
Then the second connection attempt could fail. A debug log (generated using
the LOGFILE connection parameter) would indicate that shared memory could
not be started. This has been fixed.
================(Build #4259 - Engineering Case #320516)================
In a datasource file on Windows CE, the long form of a connection parameter
could not have been specified, only the short form. For example, DATABASEFILE
could have been specified, but not DBF. This has been fixed so that either
form can now be specified for all parameters.
================(Build #4259 - Engineering Case #320581)================
When using SQLGetData on a SQL_WCHAR column, fetched from a UTF8 database,
the resultant indicator value was incorrect if the data was obtained in chunks.
When using SQLGetData to convert binary data to a SQL_WCHAR column, the
resultant indicator value was incorrect.
For binary to wide character conversions, if the data was fetched in chunks,
the pieces would be placed in the wrong location in memory. The terminating
NULL wide character was placed in the wrong location in memory.
For binary to wide character conversions, the call to the translation DLL
passed the wrong type, SQL_CHAR instead of SQL_WCHAR, for wide characters.
In some cases, it passed the wrong length as well.
These problems have been corrected.
================(Build #4276 - Engineering Case #323291)================
If a cursor had both bound columns and used getdata statements, and the first
column was not bound but accessed using a getdata, then the fetch may have
returned no error or warning when it should have returned SQL_NOTFOUND or
SQL_NO_DATA. This problem was more likely when the DisableMultiRowFetch
parameter was used. Attempting to do a getdata when past the end of the
result set would have failed with a "No current row of cursor" error.
Also, if a cursor had both bound columns and used getdata statements, and
the DisableMultiRowFetch connection parameter was used, poor performance could
have occurred.
Both of these problems have been fixed.
================(Build #4282 - Engineering Case #324841)================
When a procedure returns multiple result sets, the SQLMoreResults() function
is used in ODBC to move to the next result set. If that next statement returned
a warning, SQLMoreResults would have completed without opening the result
set. The problem is now fixed, and SQLMoreResults will return SQL_SUCCESS_WITH_INFO
indicating that the result set is open, but that a warning was returned.
================(Build #4293 - Engineering Case #326460)================
When using ODBC and fetching multi-byte characters from a UTF8 database,
a truncation error could have occurred, even though the correct target buffer
length and type were set in SQLBindCol. The ODBC driver used the length of
the user's buffer as the length of its own internal buffer. Since one UTF8
character can occupy 1, 2, 3, 4 or more bytes, a larger internal buffer is
required. This has been fixed.
================(Build #4293 - Engineering Case #326462)================
SQLGetDescField and SQLColAttribute were returning SQL_FALSE for the SQL_DESC_UNSIGNED
attribute for non-numeric types such as SQL_CHAR, implying that these were
signed types. The ODBC standard states that: "This read-only SQLSMALLINT
record field is set to SQL_TRUE if the column type is unsigned or non-numeric,
or SQL_FALSE if the column type is signed." Also, SQL_TINYINT was treated
as signed when ASA does not support a signed tinyint.
This has been corrected. The following types are now considered signed.
SQL_C_NUMERIC or SQL_NUMERIC
SQL_DECIMAL
SQL_C_FLOAT or SQL_REAL
SQL_C_DOUBLE or SQL_DOUBLE
SQL_BIGINT
SQL_C_SBIGINT
SQL_C_LONG or SQL_INTEGER
SQL_C_SLONG
SQL_C_SHORT or SQL_SMALLINT
SQL_C_SSHORT
The rest are unsigned types including SQL_C_TINYINT or SQL_TINYINT and SQL_C_STINYINT
since ASA does not support a signed tinyint.
================(Build #4295 - Engineering Case #326962)================
Connecting to the utility database using the iAnywhere JDBC Driver, when
running in a Japanese environment, and then attempting to execute a CREATE
DATABASE command with Japanese characters in the database name, would have
failed with an error that the database '??' could not be created. A similar
error would have occurred when using Sybase Central connected via the iAnywhere
JDBC Driver. This problem has now been corrected.
================(Build #4076 - Engineering Case #270447)================
Updating, inserting or deleting through an ADO client side cursor would have
failed with the error: "Insufficient base table information for updating
or refreshing". This is now fixed.
================(Build #4076 - Engineering Case #286173)================
When viewing table data through the Visual Studio .Net Server Explorer, character
data had an extra byte (usually a square box) appended to it. This has been
fixed.
================(Build #4076 - Engineering Case #287854)================
Using ADO and fetching backward through a cursor (using MovePrevious) until
hitting the start of the result set (BOF) and then fetching forward (using
MoveNext) would have incorrectly set EOF on the first MoveNext after hitting
BOF. This is now fixed.
================(Build #4076 - Engineering Case #298526)================
When using the OLEDB driver to retrieve schema rowsets, the rowset properties
were sometimes incorrect and could have (depending on the application) caused
every second row to be missing. The DBPROPSET_ROWSET properties which could
be wrong, DBPROP_BOOKMARKS and DBPROP_IRowsetLocate, have been corrected.
================(Build #4076 - Engineering Case #298602)================
Fetching the second row of a result set in an updateable cursor, where one
of the columns had more than 200 bytes of data, would have resulted in an
error. This has now been fixed.
================(Build #4078 - Engineering Case #300109)================
Columns were always being reported as not nullable through the ASA provider.
This was most noticeable when generating datasets in Visual Studio .NET.
The AllowDBNull property would always be false, even if a column allowed
nulls. This has been corrected.
================(Build #4083 - Engineering Case #299664)================
Using the ASA OLE DB provider to access a database from PowerBuilder would
have failed as of versions 7.0.4.3382, 8.0.1.3062 and 8.0.2.4076. This was
due to a problem introduced as part of the fix for issue 270447. This problem
has now been corrected.
================(Build #4088 - Engineering Case #302381)================
It was possible that a client application, using the ASA provider, could
have overwritten memory when updating or inserting a record in an ADO recordset
or through the IRowsetUpdate or IRowsetInsert interfaces. This has been fixed.
================(Build #4094 - Engineering Case #303608)================
This change fixes a number of provider problems when used with PowerBuilder.
Note that a PowerBuilder problem still exists which causes an error dialog
when creating catalog tables when connecting to a Database Profile from PowerBuilder
for the first time.
The problems fixed include:
1. IDBInitialize::Initialize no longer requires the Data Source property
(DBPROP_INIT_DATASOURCE). Before this fix, all connection properties were
ignored if there was no Data Source property and the OLE DB driver prompted
for connection information.
2. IOpenRowset::OpenRowset was not correctly using passed in properties
to determine cursor type & updatability. This also affects ITableDefinition::CreateTable
if ppRowset is not NULL.
3. IColumnInfo::GetColumnInfo now only includes bookmark column info if
bookmarks are enabled on the rowset.
4. IRowsetUpdate::Update now allows non-NULL pcRows and NULL prgRows. This
affects 8.0.0 and up only, since the 7.0.4 OLE DB provider did not give an
error in this case. (The Microsoft MSDN Documentation states that this is
an error, but I expect that Microsoft OLE DB drivers actually allow this).
================(Build #4099 - Engineering Case #304698)================
If the 8.0 ASAProv OLE DB provider was used to connect to a database running
on an ASA 7.0 server, or the codepage for the database character set was not
installed, the connection would fail with the error:
"The system does not support conversion between Unicode and the requested
character set, substituting the current ansi codepage".
This is now treated as a warning instead of an error.
================(Build #4119 - Engineering Case #296415)================
Using client side cursors to fetch datetime values through the ASA OLE DB
provider (ASAProv) would have failed or returned #ERROR instead of the
actual value. This is now fixed, but a workaround is to use server
side cursors.
================(Build #4199 - Engineering Case #302936)================
It was possible that an application could have crashed, or failed to insert
a row, when calling the rowset update function (or the AddNew and Update
functions) in ADO. This has now been fixed.
================(Build #4206 - Engineering Case #306023)================
The ASAProv OLEDB provider could have failed when accessed by multiple threads
concurrently, with failures more likely to have occurred on multiprocessor
machines than single-processor machines. One instance of this failure was
the error DB_E_BADTYPENAME (0x80040E30) from ICommandWithParameters::SetParameterInfo.
This is now fixed.
================(Build #4212 - Engineering Case #305107)================
When the provider copied column values to its data buffer, a column could
have been partially overwritten by a subsequent column, if an intervening
column was the empty string. This has been fixed.
================(Build #4213 - Engineering Case #308245)================
When the function IRowsetLocate::GetRowAt was called to fetch result set rows
using bookmarks, ASAProv may have missed a row for static and keyset cursors,
due to an invalid offset for some bookmark values. This is now fixed.
================(Build #4216 - Engineering Case #306474)================
If a blob column was updated using the ASA provider (e.g. through ADO cursors),
it could have crashed or updated the data incorrectly. This has been fixed.
================(Build #4217 - Engineering Case #309628)================
When fetching data via ASAProv from a database which used a multibyte collation,
and charset conversion was requested, the returned length of the converted
string was not set correctly. This could have caused unintended behaviour
in ADO applications. This is now fixed.
================(Build #4219 - Engineering Case #310101)================
Trying to disable OLE DB Connection Pooling by adding "OLE DB Services =
-2" in the connection string of the ASA OLEDB provider, would have caused
a "R6025 Pure Virtual Function Call" error, when a second query was executed.
This has now been fixed.
================(Build #4223 - Engineering Case #310866)================
ASAProv was returning incorrect string values for databases with a UTF8 collation.
The UTF8 string values were converted to the required string type (i.e. DBTYPE_STR,
DBTYPE_WSTR or DBTYPE_BSTR), but the null terminator was not being set. This
is now fixed by null terminating the string.
================(Build #4226 - Engineering Case #311234)================
When connected to a database with a UTF8 collation, ASAProv converts from
UTF8 to Unicode. This conversion may have used a wrong string length, which
would have caused the run-time heap to become corrupted. Also, the string
buffer was not initialized before converting the UTF8 strings. These problems
led to various failures using VS.Net, and have now been fixed.
================(Build #4230 - Engineering Case #312905)================
Fetching more than 200 characters, using the OLEDB provider, into a DBTYPE_BSTR
could have left garbage data following the valid column data. This has been
fixed.
================(Build #4231 - Engineering Case #299584)================
After a row is added to an OLEDB rowset (ADO recordset), its edit state is
set to DBPENDINGSTATUS_NEW (ADO adEditAdd). If column values are then set
for the row, the edit state should not change, but the ASA provider was changing
it to DBPENDINGSTATUS_CHANGED (ADO adEditInProgress). Now it will no longer
do this for newly added rows, so they will retain their DBPENDINGSTATUS_NEW
(ADO adEditAdd) state until the row is committed to the database or the
changes are rolled back.
================(Build #4231 - Engineering Case #313309)================
Calling a stored procedure in Visual Basic .Net could have resulted in the
error "syntax error or access violation: near '?'". This is now fixed.
================(Build #4233 - Engineering Case #312906)================
The SQL generated by the ASA provider's ITableDefinition and IOpenRowset
methods was not being correctly quoted. These methods would have failed
if the table name was specified as owner.name or if any object name was a
keyword or not a valid identifier. For 7.0.4 only, if ITableDefinition or
IAlterTable methods generated SQL that was over 255 characters, these methods
could fail. This has been fixed.
================(Build #4233 - Engineering Case #313144)================
When using the OLEDB driver (or ODBC) on Windows CE devices, Japanese characters
in object names (e.g. table names) having a second byte of \x7b could
have caused access violations or application errors in client applications.
A workaround is to use double quotes to quote the object names. This problem
is now fixed.
================(Build #4233 - Engineering Case #313708)================
If the fetch of a row from a result set resulted in some sort of error (e.g.
a conversion error), this error was suppressed by the provider and the rowset
was marked as having no more rows. Now the error is properly propagated
back to the client.
================(Build #4234 - Engineering Case #311280)================
When querying the datatype of TIME, DATE, or TIMESTAMP columns of an ASA
database, the ASA provider returned different values than Microsoft's provider,
MSDASQL. This has now been corrected.
================(Build #4241 - Engineering Case #301502)================
When an OLE DB schema rowset is opened to retrieve the parameter information,
the ASA provider executes the stored procedure 'sa_oledb_procedure_parameters',
which takes four parameters: 'inProcedureCatalog', 'inProcedureSchema', 'inProcedureName'
and 'inParameterName'. If the 'inProcedureSchema' parameter was not specified
(as is the case with Delphi applications), the default value of 'current
user' was used. If this was not the same as the user that created the procedure,
no records would have been returned. This is now fixed by changing the default
value of 'inProcedureSchema' to the empty string ('').
================(Build #4243 - Engineering Case #314977)================
The OLEDB provider had a number of problems when used with a Borland Delphi-built
application:
- the provider could have overrun memory when fetching 200 bytes or less
into a buffer smaller than or the same size as the actual data, when fetching
binary data into DBTYPE_VARIANT or any data into DBTYPE_STR or DBTYPE_BYTES.
This could have caused the application to crash as well as other problems.
- the provider could have used an incorrect length when getting data from
parameters of type DBTYPE_VARIANT containing DBTYPE_ARRAY. This could have
caused the wrong length of data to be used and possibly the application to
crash.
- NULL binary input parameters may have resulted in E_OUTOFMEMORY
- pessimistic locking was used when optimistic locking should have been
used, resulting in reduced concurrency and possible application blocking.
These problems have now been fixed. In 7.0.4 and 8.0.2, the locking type
has not been changed since an 8.0.0 or higher engine is required and the
change can cause the engine to choose a different cursor type.
================(Build #4246 - Engineering Case #317126)================
When using an OLE DB RowsetViewer with an OLE DB Tables schema rowset (or
other schema rowsets), without setting the TABLE_SCHEMA restriction (or
other SCHEMA restrictions), the returned rowset only contained entries created
by the current user; the other entries were missing. This has now been fixed.
================(Build #4248 - Engineering Case #317604)================
When an ADO recordset which returned multiple resultsets was opened, fetching
the second resultset would have caused the application to crash. This problem,
which has now been fixed, was introduced by the change for QTS 310101.
================(Build #4269 - Engineering Case #320668)================
When an OleDbDataReader was open on a connection to a Japanese database (Default
collation=932JPN), string values were truncated when fetched. The length
of the returned strings was taken as the number of characters, when it should
be the number of bytes. This has now been corrected.
================(Build #4284 - Engineering Case #323019)================
When running a query with a prepared command, the ASA Provider created an
internal rowset which was never freed, causing a memory leak. This was fixed
by deleting the internal rowset when the owning command object is released.
================(Build #4294 - Engineering Case #318938)================
If an application using the OLEDB driver read a bitmap from a longbinary
column and wrote it to a file, the application would likely have crashed.
If the longbinary column had length N, then the driver copied N+1 bytes,
changing a byte in memory that it did not own. This has been fixed.
================(Build #4294 - Engineering Case #326758)================
If an ADO application attempted to fetch and display columns that contained
unsigned, tinyint, or bigint (64 bit) values, an error could have occurred
or an incorrect value could have been displayed.
These included the following types and sample values:
tinyint 255
bigint 9223372036854775807
unsigned bigint 18446744073709551615
unsigned int 4294967295
unsigned smallint 65535
A tinyint type was treated as a signed value when it is not. A bigint type
was treated as a 32-bit value when it is not. An unsigned bigint type was
treated as a signed, 32-bit value when it is not. An unsigned int type was
treated as a signed value when it is not.
An unsigned smallint type was treated as a signed value when it is not.
These datatypes are now handled correctly.
================(Build #4296 - Engineering Case #327019)================
When an application using the OLEDB driver was connected to a database with
the Turkish collation, (1254TRK), some calls to get metadata would have failed.
This problem has now been fixed.
================(Build #4296 - Engineering Case #327081)================
The error 'Count field incorrect' could have occurred when executing an UPDATE
statement command and setting a binary column to the value of a variable.
This is now fixed.
================(Build #4304 - Engineering Case #323298)================
The ASA Provider held locks and kept cursors open after rowsets were closed,
if it was called from the OLEDB.NET provider. The internal ODBC cursor is now
closed when the OLEDB rowset is closed.
================(Build #3609 - Engineering Case #303115)================
If a procedure had an OUT parameter of type binary, long binary, varchar or
long varchar, then attempting to execute the procedure using the JDBC-ODBC
bridge would not have worked. This problem has now been fixed.
================(Build #3609 - Engineering Case #303208)================
If a procedure had an INOUT parameter, then executing the procedure using
the JDBC-ODBC bridge would have resulted in an error.
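For reference, a procedure of the affected form looks like the following
minimal sketch (the name is hypothetical):
create procedure bump_counter( inout val integer )
begin
    -- the value passed in is read, modified and returned to the caller
    set val = val + 1;
end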
================(Build #4076 - Engineering Case #299546)================
If executing a statement resulted in a long error message being returned
from the server, then retrieving that error message using the JDBC-ODBC bridge
would likely have crashed the application or resulted in very strange behaviour.
This problem has now been fixed.
================(Build #4085 - Engineering Case #300775)================
Attempting to apply an EBF on machines having only the Client portion (i.e.,
no database server) of Adaptive Server Anywhere installed would have failed.
This has been fixed.
================(Build #4088 - Engineering Case #302180)================
When using the JDBC-ODBC bridge, calls to PreparedStatement.getMetaData()
were returning null all the time. This is now fixed.
================(Build #4093 - Engineering Case #303297)================
If a parameter or column was fetched as numeric and then retrieved as BigDecimal,
the bridge failed to convert the value correctly. This problem has now been
fixed.
================(Build #4094 - Engineering Case #303656)================
The JDBC-ODBC bridge did not properly handle OUT parameters of type TIME,
DATE and TIMESTAMP. Sometimes a strange error message may have been thrown
when retrieving the OUT parameter, while at other times an unexpected result
would have been retrieved. This problem has been fixed.
================(Build #4094 - Engineering Case #303657)================
When using the JDBC-ODBC bridge, if a parameter was of type INT, SMALLINT,
TINYINT, BIGINT, BIT, REAL, DOUBLE or FLOAT and had the value NULL, then
calling getObject would have returned the value 0 instead of returning a
null object. This problem has now been fixed.
================(Build #4097 - Engineering Case #304086)================
When using the JDBC-ODBC bridge, a race condition could have occurred when
calling the following methods, if JDBC objects created on the same connection
were used concurrently on different threads:
- IStatement.getColStrAttr
- IResultSetMetaData.getCatalogName
- IResultSetMetaData.getColumnLabel
- IResultSetMetaData.getColumnName
- IResultSetMetaData.getColumnTypeName
- IResultSetMetaData.getSchemaName
- IResultSetMetaData.getTableName
This could have resulted in wrong results or crashes. The problem is now
fixed.
================(Build #4098 - Engineering Case #304495)================
When using DBISQL and the JDBC-ODBC bridge, any attempt to execute a batch
with varying multiple result sets would have failed. For example:
begin
select 1,2;
select 3
end
This problem has now been fixed.
================(Build #4122 - Engineering Case #325034)================
Installing on a device running Windows CE 4.1 would have failed. This was
because the INF file used by the CE Application Manager only supported versions
up to 4.0; version 4.1 did not exist when 8.0.2 originally shipped. This has
now been fixed.
There are two workaround options:
1) Install ASA 8.0.2 for Windows CE on the desktop machine, but choose _NOT_
to deploy to the device. Then install an EBF for ASA for Windows CE, build
4122 or later, which will allow the install to the device to work correctly.
2) Edit <ASA_DIR>\ce\asa_ce.inf with a text editor. In each of the following
sections, change "VersionMax" to something larger than 4.1 (for example, 5.0):
---------------------------------------------------------------------
[CEDevice.MIPS.30] ; for MIPS processor
ProcessorType = 4000 ; processor value for MIPS R3900
VersionMin = 3.0
VersionMax = 4.0 <-- change this to 5.0
[CEDevice.ARM.30] ; for StrongARM processor
ProcessorType = %ArmProcessor% ; processor value for ARM
VersionMin = 3.0
VersionMax = 4.0 <-- change this to 5.0
[CEDevice.X86.30] ; for X86 processor
ProcessorType = x86 ;
VersionMin = 3.0
VersionMax = 4.0 <-- change this to 5.0
---------------------------------------------------------------------
================(Build #4200 - Engineering Case #304465)================
The system procedures sp_tsql_environment and sp_reset_tsql_environment were
setting options using option names that did not match the case in the SYSOPTION
system table. This caused problems with databases created with certain collations.
These procedures have now been changed to use option names with a case matching
those in the SYSOPTION table.
================(Build #4204 - Engineering Case #305860)================
If a result set had a column of type unsigned smallint, unsigned int or unsigned
bigint, and the value of the column was greater than the largest smallint,
int or bigint (respectively), then retrieving the column using the ASA JDBC-ODBC
bridge would have resulted in a "value out of range" error. This problem
has now been fixed.
================(Build #4206 - Engineering Case #306157)================
If a user disconnected from Sybase Central, which had been connected to a
database server via the JDBC-ODBC bridge, and had selected from a table and
viewed its data, Sybase Central would have quietly exited. This problem has
now been fixed.
================(Build #4214 - Engineering Case #307606)================
When upgrading to a new minor release of Adaptive Server Anywhere, after
having applied an EBF to the previous version where the EBF was newer than
the minor version being installed, the performance counters could have failed
to install correctly. During installation, the user may have seen an error
message such as "unable to register dbctrs8.dll: -4".
For example, if a user installed 8.0.1 GA followed by the 8.0.1.3080 EBF, then
upgraded to 8.0.2 GA, the installation may have failed because the 8.0.1.3080
EBF was newer than 8.0.2 GA.
This is now fixed, but the problem may be worked around by deleting the
dbctrs8.ini and sqlactnm.h files in the win32 directory before installing
8.0.2 GA.
================(Build #4214 - Engineering Case #309074)================
When connected via the JDBC-ODBC bridge, if an output parameter was registered
as DECIMAL and then subsequently fetched using getBigDecimal, the bridge
would have thrown a "Rounding Necessary" error, instead of properly returning
the output parameter. This is now fixed.
================(Build #4217 - Engineering Case #309682)================
Starting the MobiLink Notifier in an environment where the default locale
language was either German or French would have caused the exception
"java.util.MissingResourceException". This problem is now fixed. A workaround
is to execute java.util.Locale.setDefault( new Locale( "en" ) ) in a static
initializer in the first class listed in the StartClasses, or by some other
means of changing the default locale for the JVM instance (e.g. setting System
properties in Java code or using the -D switch; please see Sun's documentation).
================(Build #4223 - Engineering Case #310912)================
When using the JDBC-ODBC bridge, warnings were not always being reported
back to the application.
For example, using DBISQL connected via the JDBC-ODBC bridge, executing
the following:
set temporary option ansinull = 'on';
select count(NULL) from dummy;
will cause a warning dialog "null value eliminated in aggregate function".
Executing the following statement:
select top 10 * from rowgenerator;
would have returned no warning. This problem has now been fixed.
================(Build #4239 - Engineering Case #314893)================
When using Java to write MobiLink synchronization logic, if the connection
to the MobiLink server used the JDBC-ODBC bridge, a fetched timestamp could
have had an incorrect value. In particular, the value would have been wrong
if the timestamp column in the database had non-zero milliseconds. This has
now been fixed.
================(Build #4250 - Engineering Case #318140)================
If the method ResultSet.getAsciiStream was used to get an AsciiStream of
a string column (char, varchar or long varchar), the AsciiStream returned
would not have given the proper ASCII representation of the string. This
is now fixed.
================(Build #4251 - Engineering Case #318321)================
The JDBC-ODBC bridge was not handling conversions from float to integer correctly.
If setObject was used with an object type of float or double and a target
SQL type of integer, smallint or bigint, calling executeUpdate would sometimes
have incorrectly failed with a conversion error. This problem has now been
fixed. In the cases where the conversion error is valid (as in an overflow
or underflow case) the error is still returned as expected.
================(Build #4252 - Engineering Case #318486)================
If Connection.createStatement or Connection.prepareStatement was used without
specifying a particular result set type and concurrency, then the bridge
would have used a default result set type of scroll sensitive, instead of
forward only (as specified by the JDBC specification). The bridge now defaults
to forward only. Note that the concurrency was, and still is, read only (as
specified by the JDBC specification).
================(Build #4256 - Engineering Case #319724)================
When setting an unknown property at connection time, the JDBC-ODBC bridge
would have thrown an exception. The bridge now ignores the property, which
is the behaviour of the Sun bridge.
================(Build #4268 - Engineering Case #315656)================
The shortcut to the sample database installed on Japanese Windows CE machines
was referencing asademoj.db, instead of asademo.db. This has now been corrected.
================(Build #4303 - Engineering Case #329001)================
An embedded SQL application may have crashed with SIGBUS on 64-bit UNIX platforms,
or received the error "SQL ERROR (-758) SQLDA data type invalid". The SQL
preprocessor was generating the sqldabc field in the SQLDA structure incorrectly
for 64-bit platforms. This has now been fixed.
Workaround: use the flag -DLONG_IS_64BITS when compiling 64-bit embedded
SQL applications.
================(Build #3602 - Engineering Case #299872)================
In some circumstances, when a connection was established while the server
was shutting down, the server could have crashed. Database files are not
affected. This affected UNIX servers only and has been fixed.
================(Build #3602 - Engineering Case #300128)================
The minimum cache size calculated by a server running on a Windows platform
would have been different from one running on a UNIX platform. This change
makes the behaviour consistent.
Assuming that no -cl parameter was specified when starting a server:
1) Prior to this change, if -c was specified, a Windows server would set
the min cache size to be the same value as the -c value (i.e. the same as
the initial value) whereas a UNIX server would make the min cache size 2MB.
This change makes all platforms use the Windows behaviour in this situation:
min cache size is the same as the initial cache size.
2) Prior to this change, if _no_ -c was specified, the initial cache size
was computed by the server to be an appropriate size based on the size of
the database(s) being started. A Windows server would use this initial cache
size value as the minimum cache size, whereas a UNIX server would make the
min cache size 2MB. This change makes all platforms set the minimum cache
size to a platform-specific minimum value as follows: CE: 600K, all other
Windows platforms: 2M, all UNIX platforms: 8M.
================(Build #3602 - Engineering Case #300231)================
When multiple threads running Java code tried to disconnect at about the
same time, a thread deadlock was possible. This has been fixed.
================(Build #3603 - Engineering Case #300401)================
Running the dbexpand utility on Unix platforms, was taking a long time to
expand a compressed database file. This has now been greatly improved.
================(Build #3604 - Engineering Case #301929)================
In rare circumstances, the backup copy of a transaction log created by the
image-backup form of the BACKUP statement could have been corrupt. The bug
does not apply to "archive backups" created by the BACKUP statement nor does
it apply to the dbbackup command line utility. The problem which is now fixed,
was more likely to show up on SMP machines and could only happen if operations
were being performed while the log was being backed up.
================(Build #3605 - Engineering Case #301012)================
If a server was started on the same machine as another server of the same
name, which was was in the process of shutting down, the server could have
hung. Also, if a client attempted to connect via shared memory to the newly
started server, while the previous server with the same name was still shutting
down, the client could have hung. This problem has been fixed.
================(Build #3605 - Engineering Case #302812)================
In very rare situations, the server could have crashed while attempting to
shutdown in response to a dbstop command with a connection string or when
a client tried to disconnect normally. This is now fixed.
================(Build #4075 - Engineering Case #299117)================
A point of contention was present that could cause convoy phenomena to form.
These phenomena could be detected by seeing that the CPU usage on a multi-CPU
system dropped to about 1 CPU, while the number of active requests was high
and the 'Contention: Engine' performance counter or EngineContention engine
property was high.The convoys were most likely to form with requests that
accessed a table through an index, with all of the indexed pages in cache.
This point of contention has been removed.
================(Build #4076 - Engineering Case #296293)================
Under some rare conditions, queries with an ORDER BY clause, partially satisfied
by an index, may have returned an unsorted result set. The conditions under
which this may have happened were:
- The query contains an ORDER BY clause of the form col1, col2, ...., expr1,
expr2
- There exists an index IDX matching (col1, col2, ... ) (i.e., only a prefix
of the ORDER BY clause is matched by an index)
- There exists a WHERE equality predicate on at least one of the IDX columns
- The best plan chooses the index IDX as an access method
Example:
SELECT T.Y, R.Z
FROM T, R
WHERE T.X = 1 AND R.Z = T.Z
ORDER BY T.Y, R.Z
where the table T has an index IDX on (T.X, T.Y )
This is now fixed.
================(Build #4076 - Engineering Case #296935)================
Passing a subquery expression as a procedure argument could have crashed
the server. The subquery must have been part of another expression for the
crash to have occurred; simple subquery arguments result in a syntax error.
For example:
call myproc(coalesce(null,(select 1)))
A syntax error will now be generated for all subqueries used as any part
of a procedure argument. To pass a subquery to a procedure, assign the subquery
result to a variable and pass the variable to the procedure.
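For example, the call shown above could be rewritten as the following sketch
(assuming the hypothetical myproc takes a single integer argument):
begin
    declare v integer;
    -- evaluate the subquery first, then pass the variable
    set v = ( select 1 );
    call myproc( coalesce( null, v ) );
end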
================(Build #4076 - Engineering Case #296999)================
When running with a database file that was created with Version 7.0 or before,
and had not been upgraded to 8.0, then the query optimizer could have used
selectivity estimates that were too low (as low as 0%) leading to inefficient
execution plans. In cases where the plan was particularly poor, this could
have appeared as an infinite loop. This is now fixed.
================(Build #4076 - Engineering Case #297566)================
A hash join with complex join conditions of the form:
f( g( R.x ) ) = T.x
could have caused the server to crash or give wrong results, if the value
g( R.x ) was used above the join.
For example, the following query caused the problem:
select right( left( R1.z, 10 ), 1 ), 1
from R R1, R R2
where left( left( R1.z, 10 ), 6 ) = R2.z
and (R1.x+1 > 0, 0 )
for read only
This is now fixed.
================(Build #4076 - Engineering Case #297863)================
It was possible to define multiple consolidated users on the same remote
database with a SQL command similar to:
GRANT CONSOLIDATED TO u1,u2 TYPE FILE ADDRESS 'u1','u2';
This command now returns an error.
================(Build #4076 - Engineering Case #298000)================
If a Java stored procedure executed a query which referenced a proxy table,
rows could have been omitted from the result set. A common appearance of
the problem would have exhibited as missing every other row of a result from
a proxy table. This is now fixed.
================(Build #4076 - Engineering Case #298285)================
In rare circumstances, indexes could be corrupted by internal server cursors.
Internal server cursors are used by the server during the execution of many
types of statement, usually to access system tables. This is now fixed.
================(Build #4076 - Engineering Case #298374)================
The server provides concurrency control for reading and updating the column
statistics so that any changes to the statistics can take place safely in
the presence of multiple simultaneous requests accessing the same statistics.
The concurrency control method used could have caused the server to slow
down unacceptably at very high rates of access (in one test the problem was
observed after the number of concurrent users exceeded approximately 1500).
The server now uses an improved concurrency control method and the problem
should no longer occur.
================(Build #4076 - Engineering Case #298410)================
The following changes have been made:
Added: property( 'LicensesInUse' ) - determines the number of "seats" or
"concurrent users" currently connected to the network server. Each "seat"
or "concurrent user" is determined by the number of unique client network
addresses connected to the server, not the number of connections. For example,
if three client machines are connected to a server, and each client machine
has two connections, select property( 'LicensesInUse' ) returns '3'.
Corrected: property('ProcessCPU') - now correctly returns the total process
time in seconds. It was incorrectly returning two times the property('ProcessCPUUser')
on Windows platforms.
Corrected: db_property('IdleCheck'), db_property('IdleChkpt') and the statistics
"Idle Actives/sec" and "Idle Checkpoints/sec" - now return correct values
(before they always had values of 0).
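These values can be queried directly; for example:
select property( 'LicensesInUse' );
select property( 'ProcessCPU' );
select db_property( 'IdleChkpt' );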
================(Build #4076 - Engineering Case #298511)================
Queries with views or derived tables in the null-supplying side of an outer
join (left outer, right outer or full outer joins) may have returned an incorrect
result set under these conditions:
- there was a local predicate referencing only the view in the WHERE clause
of the main query block and this predicate was not null-rejecting (for example,
"View.column is null").
- the view or derived table could not be flattened (e.g., it was a
grouped view, or it contained a UNION)
- the outer join could not be transformed into an inner join
Example:
SELECT * FROM employee a
LEFT OUTER JOIN
(
SELECT * FROM employee
UNION
SELECT * FROM employee
) b
ON (a.emp_id = b.emp_id)
WHERE b.emp_id IS NULL
This is now fixed.
================(Build #4076 - Engineering Case #298612)================
If a query contained a predicate that trivially evaluates to FALSE (e.g.,
c1 BETWEEN 10 AND 5, where the lower bound exceeds the upper bound), the
optimizer would have recognized that the predicate caused the result set
to be empty. However, in some cases (e.g., if a GroupByHash appeared in the
plan above the filter), the server could still have allocated memory. Depending
upon the estimated size of the result set without the trivially FALSE predicate,
enough memory could have been allocated to exhaust the server's available
memory. This problem has now been fixed.
================(Build #4076 - Engineering Case #299060)================
ASA now supports slightly larger standard cache sizes by allowing the cache
to be allocated in up to 4 contiguous pieces where only two were allowed
previously. Some DLLs load at addresses which fragment the address space
and limit the amount of memory that can be allocated with only two pieces.
With this change, perhaps 100MB to 200MB of additional memory may be allocated
for a standard cache, depending on the system. For AWE caches, this change
means that more address space will now be available for mapping to physical
memory which will reduce the number of mapping operations which must be performed.
Also with this change, the minimum AWE cache size has changed from (3GB-256MB)
to 2MB. It is recommended, however, that a traditional cache be used if a
traditional cache of the desired size can be allocated. Recall that AWE caches
do not support dynamic cache sizing and that AWE cache page images are locked
in memory and therefore they cannot be swapped out if memory is needed by
other processes in the system.
================(Build #4076 - Engineering Case #299066)================
If an UPDATE statement used an index on a column defined with DEFAULT TIMESTAMP,
it could have looped infinitely. This is now fixed.
================(Build #4076 - Engineering Case #299106)================
A point of contention existed which could have reduced concurrency on SMP
machines, when using work tables (tables used for operations such as sort,
group-by, and distinct). This situation could be recognized by observing
CPU use on a multi-processor system at about 1 CPU, while the number of active
requests was high, the HashContention property was high ('Contention: Hash
Chains' performance counter), and many of the active requests were using
work tables. The point of contention has been removed.
================(Build #4076 - Engineering Case #299110)================
In some circumstances, fetching backward on an index scan could have caused
the index scan to fetch all rows prior to the current position, back to the
beginning of the table. Even without backward fetches from the client, this
situation could have arisen due to prefetching.
This situation can be detected by higher-than-expected running times combined
with a larger-than-expected CacheReadIndLeaf property (the 'Cache Reads:
Index Leaf' performance counter). This has now been fixed.
================(Build #4076 - Engineering Case #299119)================
Queries having DISTINCT and ORDER BY may have returned rows in the incorrect
order if the ORDER BY clause was satisfied by an index and DISTINCT was executed
as a DistinctHash.
Example:
select DISTINCT T.y
from T
where T.x > 100
order by T.x
If an index scan was used for the table T (the index on T.x) and DistinctHash
was used, the result may not have been sorted. This is now fixed.
================(Build #4076 - Engineering Case #299361)================
During checkpoints, more writes might have been performed than necessary
to write out table bitmaps. Unnecessary writes are no longer done.
================(Build #4076 - Engineering Case #299368)================
If the server was running in bulk mode (see the "-b" command line option)
and a transaction caused a rollback, then in some rare situations it was
possible for the database to become inconsistent as far as the catalog is
concerned. This problem has now been corrected. As a consequence of this
fix, the server will now use the rollback log during bulk mode, rather than
the transaction log as before.
================(Build #4076 - Engineering Case #299466)================
The network server now supports the LocalOnly TCP option (e.g. -x tcpip(LocalOnly=YES)),
which restricts connections to the local machine. Connection attempts from
remote machines will not find this server (regardless of connection parameters),
and dblocate running on remote machines will not see this server. This parameter
effectively turns the network server into a personal server, without the
personal server limitations (i.e. no connection limit, no two-CPU limit, etc.).
================(Build #4076 - Engineering Case #299721)================
The VALIDATE TABLE statement, or the dbvalid utility, could have spuriously
reported missing index entries. When checking a foreign combined index (i.e.
5.x format) with the FULL CHECK option, entries containing some, but not
all, nulls would have generated a spurious report of a missing index entry.
This is now fixed.
================(Build #4076 - Engineering Case #299936)================
A new property, QueryCachedPlans, has been added which shows how many query
execution plans are currently cached. This property can be retrieved using
the CONNECTION_PROPERTY function to show how many query execution plans are
cached for a given connection, or DB_PROPERTY can be used to count the number
of cached execution plans across all connections.
This property can be used in combination with QueryCachePages, QueryOptimized,
QueryBypassed, and QueryReused to help determine the best setting for the
MAX_PLANS_CACHED option.
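For example, the new property can be retrieved at both scopes (a minimal
sketch using the functions named above):
select connection_property( 'QueryCachedPlans' );
select db_property( 'QueryCachedPlans' );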
================(Build #4076 - Engineering Case #300402)================
Executing the system procedure xp_startmail would have returned either
0 (success) or 2 (failure), which made diagnosing xp_startmail failures difficult.
Now xp_startmail (as well as xp_sendmail and xp_stopmail) returns one of
the following:
Value Description
0 Success
2 xp_startmail failed
3 xp_stopmail failed
5 xp_sendmail failed
11 Ambiguous recipients
12 Attachment not found
13 Disk full
14 Failure
15 Invalid session
16 Text too large
17 Too many files
18 Too many recipients
19 Unknown recipient
20 Login failure
21 Too many sessions
22 User abort
23 No MAPI
24 xp_startmail not called (xp_sendmail and xp_stopmail only)
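For example, the return code can now be checked directly (the mail profile
name and password below are hypothetical):
select dbo.xp_startmail( 'my_profile', 'my_password' );
A result of 0 indicates success, 2 indicates xp_startmail failed, 21 indicates
too many sessions, and so on, as listed above.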
================(Build #4077 - Engineering Case #299947)================
Expressions of the form '0-(expr)' could have given the wrong answer, namely
'(expr)', where (expr) is any expression.
For example:
select 0-1; ---> gives 1 incorrectly
select 0.0-1.0; ---> gives 1 incorrectly
Now the correct answer is returned.
================(Build #4078 - Engineering Case #293788)================
Calling xp_sendmail functions simultaneously, from different connections,
could have caused unexpected behaviour, up to and including a server crash.
This has been fixed, but in order to take advantage of it, several system
procedures need to change. Databases created, or rebuilt, after this change
will have these new procedures, but for existing databases, the following
script will make the necessary changes:
-- START OF SCRIPT
insert into dbo.EXCLUDEOBJECT values ( 'xp_real_stopmail' , 'P' )
go
insert into dbo.EXCLUDEOBJECT values ( 'xp_real_startmail' , 'P' )
go
insert into dbo.EXCLUDEOBJECT values ( 'xp_real_sendmail' , 'P' )
go
CREATE function dbo.xp_real_startmail(
in mail_user char(254) default null,
in mail_password char(254) default null,
in connection_id int )
returns int
external name 'xp_startmail@dbextf.dll'
go
ALTER function dbo.xp_startmail(
in mail_user char(254) default null,
in mail_password char(254) default null )
returns int
begin
declare connection_id int;
select connection_property( 'Number' ) into connection_id from dummy;
return( xp_real_startmail( mail_user, mail_password, connection_id ) )
end
go
CREATE function dbo.xp_real_stopmail( in connection_id int )
returns int
external name 'xp_stopmail@dbextf.dll'
go
ALTER function dbo.xp_stopmail()
returns int
begin
declare connection_id int;
select connection_property( 'Number' ) into connection_id from dummy;
return( xp_real_stopmail( connection_id ) )
end
go
CREATE function dbo.xp_real_sendmail(
in recipient char(254),
in subject char(254) default null,
in cc_recipient char(254) default null,
in bcc_recipient char(254) default null,
in query char(254) default null,
in "message" char(254) default null,
in attachname char(254) default null,
in attach_result int default 0,
in echo_error int default 1,
in include_file char(254) default null,
in no_column_header int default 0,
in no_output int default 0,
in width int default 80,
in separator char(1) default '\t',
in dbuser char(254) default 'guest',
in dbname char(254) default 'master',
in type char(254) default null,
in include_query int default 0,
in connection_id int )
returns int
external name 'xp_sendmail@dbextf.dll'
go
ALTER function dbo.xp_sendmail(
in recipient char(254),
in subject char(254) default null,
in cc_recipient char(254) default null,
in bcc_recipient char(254) default null,
in query char(254) default null,
in "message" char(254) default null,
in attachname char(254) default null,
in attach_result int default 0,
in echo_error int default 1,
in include_file char(254) default null,
in no_column_header int default 0,
in no_output int default 0,
in width int default 80,
in separator char(1) default '\t',
in dbuser char(254) default 'guest',
in dbname char(254) default 'master',
in type char(254) default null,
in include_query int default 0 )
returns int
begin
declare connection_id int;
select connection_property( 'Number' ) into connection_id from dummy;
return( xp_real_sendmail( recipient, subject, cc_recipient, bcc_recipient,
query, "message", attachname,
attach_result, echo_error, include_file, no_column_header,
no_output, width, separator, dbuser, dbname, type,
include_query, connection_id ) );
end
go
-- END OF SCRIPT
================(Build #4078 - Engineering Case #300139)================
Under Windows 95, there was no response to a right-click on the server icon
in the system tray. This has now been fixed.
================(Build #4079 - Engineering Case #300073)================
When using an external procedure or function written in Delphi, the server
could have crashed with a floating point exception. The DLL would have enabled
a number of floating point exceptions that are masked by default. While the
proper fix would be to rewrite the external procedure or function, the server
will now reset/restore the floating point control word at appropriate points.
Code, such as the following, can be used to explicitly set the floating
point control word:
const
  MCW_EM = DWord($133f);
begin
  Set8087CW(MCW_EM);
end;
================(Build #4079 - Engineering Case #300192)================
When running the server on NetWare 4.x, it required that TCPIP.NLM be already
loaded on the NetWare server. This requirement has been removed, it is now
loaded dynamically.
================(Build #4079 - Engineering Case #300203)================
Using Syntax 2 of the FORWARD TO statement could have caused the local ASA
server to crash. An example is as follows:
forward to SOME_REMOTE_SERVER;
select * from systable; ===> CRASH
Whereas the following:
forward to SOME_REMOTE_SERVER {select * from systable}
was not a problem. This has now been fixed.
================(Build #4079 - Engineering Case #301567)================
The event_parameter function can now be used to determine the name of the
schedule which caused an event to be fired. The value of:
event_parameter('ScheduleName')
will be the name of the schedule which fired the event. If the event was
fired manually using TRIGGER EVENT or as a system event, the result will
be an empty string. If the schedule was not assigned a name explicitly when
it was created, its name will be the name of the event.
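A sketch of an event handler that reports the name of the schedule which fired
it (the event and schedule names are hypothetical):
create event MyEvent
schedule MySchedule start time '10:00PM' on ('Thu','Fri')
handler
begin
message 'Fired by schedule: ' || event_parameter( 'ScheduleName' );
end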
================(Build #4080 - Engineering Case #297828)================
A point of contention existed which could have reduced concurrency on SMP
machines, when using outer joins that supplied NULL rows. This situation
could be recognized by observing CPU use on a multi-processor system at about
1 CPU, while the number of active requests was high, and many of the active
requests were using outer joins that supply NULL rows. The point of contention
has now been removed.
================(Build #4080 - Engineering Case #299102)================
The table sys.dummy was previously implemented as a real table. As the single
row was fetched from this table, the corresponding table page was temporarily
latched while reading the column out (as is done for all base tables). Since
the dummy table contains only one row, there was a strong possibility that
multiple clients would be concurrently attempting to latch the same page.
This could lead to reduced opportunities for parallelism, and in some cases
could lead to the formation of convoys. These symptoms could be observed
by noting that the CPU use on a multi-processor system dropped to about one
processor, while many requests were active or unscheduled; at the same time,
many of the active requests referenced the sys.dummy table.
To address this issue, the sys.dummy table is now implemented in a manner
that bypasses the latch on the table page. This is only possible because
the dummy table has known contents which cannot be modified. Further, the
sys.dummy table no longer appears in Lock nodes in the graphical plan, and
if only the sys.dummy table is present, a Lock node may now be omitted where
it was previously required.
================(Build #4080 - Engineering Case #299109)================
Search conditions of the form "col LIKE '<prefix>%' " can usually use an
index (i.e. they are sargable), if the prefix does not contain any wild card
characters.
For these conditions, the engine infers range bounds of the form "col >=
'<prefix>' AND col < '<prefix#>'", where <prefix#> indicates a version of
the prefix string that has been incremented. For example, if the following
appears in a query:
emp_lname like 'SI%'
then the optimizer will add additional predicates:
emp_lname >= 'SI' AND emp_lname < 'SJ'
Note that the last character, 'I', has been incremented to the next character
in the collation; in this case, 'J'.
For prefix strings that consisted only of the last character of the collation
sequence (for example, 'Z'), the server did not form the incremented prefix,
did not form the upper bound for the column, and did not include the lower bound.
Because of this, an index would not be used for the column. Now, for conditions
of the form:
emp_lname like 'ZZ%'
the lower bound will be added by the optimizer as follows:
emp_lname >= 'ZZ'.
An index will be selected for this column, if it would have been used for
the added range predicate.
Note that conditions such as "emp_lname LIKE 'SZ%'" are treated by adding:
emp_lname >= 'SZ' AND emp_lname < 'T'
and can use an index before and after this change.
================(Build #4080 - Engineering Case #299120)================
Simple queries such as the following:
SELECT *
FROM T
WHERE T.x = ?
would not have used an index on T.x, if the value of the host variable was
NULL. In order for this issue to appear, the query must have been sufficiently
simple to bypass optimization. For example, it must have contained only one
table, with no aggregates and no disjuncts.
The predicate T.x = ? will match no rows if the host variable is NULL. While
no rows were returned before this change, a complete sequential scan of the
table was performed. This scan is no longer done. Further, the graphical
plan would not have shown the sequential scan in this case, as the graphical
plan does not use the optimizer bypass.
================(Build #4080 - Engineering Case #300483)================
Under some circumstances, the statistics gathering in the server during query
execution could have caused a divide-by-zero floating point exception. Usually,
these exceptions do not cause any significant problems. The problem has been
fixed.
================(Build #4082 - Engineering Case #300796)================
The server automatically generates histograms on columns being loaded via
the LOAD TABLE statement. The size of a histogram that was being generated
during LOAD TABLE was unnecessarily large. Now LOAD TABLE will generate histograms
with the same sizes as the equivalent histograms created by the CREATE STATISTICS
statement.
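For comparison, the equivalent histograms can be created explicitly with a
statement such as the following (the table and column names are hypothetical):
create statistics my_table ( my_column );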
================(Build #4082 - Engineering Case #300847)================
If the path to a database file contained brace characters (e.g.
"E:\test\{test}\sample.db"), applications using dblib or ODBC could not
have autostarted it. This has been fixed. Note that starting a server directly
on the file (e.g. dbeng8 E:\test\{test}\sample.db) was not a problem.
================(Build #4082 - Engineering Case #301414)================
The START DATABASE statement now allows a database to be started in read-only
mode or with log truncation on checkpoint enabled.
New syntax:
START DATABASE database-file
[ AS database-name ]
[ ON engine-name ]
[ AUTOSTOP { ON | OFF } ]
[ WITH TRUNCATE AT CHECKPOINT ]
[ FOR READ ONLY ]
[ KEY key ]
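For example (the file path and database name below are hypothetical):
START DATABASE 'c:\\dbfiles\\sample.db'
AS sample_ro
AUTOSTOP OFF
FOR READ ONLY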
================(Build #4083 - Engineering Case #300584)================
An assertion failure could have been generated when using an aggregate function
in a GROUP BY clause. For example, the following query could have caused
the assertion failure:
102300 (8.0.1.3067) File associated with given page id is invalid or
not open
select DT.TID
from sys.syscolumn C,
( select T.table_name, max( table_id ) TID
from sys.systable T
group by T.table_name
) DT
where C.column_name = DT.table_name
group by DT.TID
In order for the problem to have occurred, an intervening work table must
have appeared between the first GROUP BY clause and the second. This has
now been fixed.
================(Build #4083 - Engineering Case #301242)================
Using 'LOAD TABLE' to load data into a table could have caused some data
to 'disappear'. For example, after loading 5000 rows of data into an empty
table the statement 'select count(*) from table' could return a value less
than 5000. Also, selecting all of the rows could return some subset of the
rows that were loaded. All of the following would need to be true to experience
this problem:
1) The table into which the data was loaded was empty
2) The table into which the data was loaded contained more than one database
page
3) The resulting table after the 'LOAD TABLE' statement contained at least
100 database pages
Databases experiencing this problem should be rebuilt. Unload the database
using DBUNLOAD and make sure to unload the data 'ordered'. CAUTION: If
you do an 'unordered' unload of the database, the missing rows will be permanently
lost.
================(Build #4083 - Engineering Case #301342)================
Validating the database may have changed the rowcount (column "count") value
in SYSTABLE. The first validation would have given the error 'Run time SQL
error -- row count in SYSTABLE incorrect'. Validating the database a second
time would have returned no error. The rowcount value in SYSTABLE is now
no longer updated when validating the database.
================(Build #4084 - Engineering Case #301419)================
An INSERT WITH AUTO NAME embedded within a stored procedure would have failed,
usually with SQLCODE -207.
For example:
CREATE TABLE T (
PKEY INTEGER NOT NULL DEFAULT AUTOINCREMENT,
DATA1 INTEGER NOT NULL,
PRIMARY KEY ( PKEY ) );
CREATE PROCEDURE P()
BEGIN
INSERT T WITH AUTO NAME SELECT 1 AS DATA1;
END;
CALL P(); -- ASA Error -207: Wrong number of values for INSERT
Using WITH AUTO NAME on an INSERT statement on its own (outside of a procedure
context) would have worked correctly. This is now fixed.
================(Build #4085 - Engineering Case #301110)================
A stored procedure cursor declared using the form:
DECLARE c CURSOR USING variable-name
could have caused an error or returned incorrect results. The problem would
have occurred after the procedure had been called more than 11 times. While
this is now fixed, a workaround is to set the value of the Max_plans_cached
option to 0.
================(Build #4085 - Engineering Case #301575)================
When there were approximately one hundred idle connections with liveness
enabled, liveness packets would have been sent out in large bursts. These
bursts could potentially have caused performance problems or dropped connections.
This fix attempts to avoid sending liveness packets in large bursts.
================(Build #4085 - Engineering Case #301603)================
The sa_validate stored procedure would have returned errors containing garbage values.
This has been fixed.
================(Build #4086 - Engineering Case #297833)================
If a computed column referenced another column in the table, then the computed
column could have been updated incorrectly to NULL or based on an old value
of the referenced column if the update or insert statement had a subquery
with an outer reference to a column of the table being updated.
For example, the following update would have caused the issue:
create table compute_test (
id char(10),
name char(40),
compute_name char(40) compute(name)
);
insert into compute_test(id,name)
select 'dba', 'test2'
from compute_test T
where 0 = ( select count(*) from sys.dummy where T.id=current publisher
);
In addition to having the subquery appear explicitly in the update or insert
statement, the problem could also have occurred if a publication referenced
a column of the table.
This has now been fixed.
================(Build #4086 - Engineering Case #298372)================
The server would have crashed on shutdown, when running on Windows CE .NET.
This is now fixed.
================(Build #4086 - Engineering Case #301923)================
If a personal server was running, and a network server was also running on
another machine with the same name, and there was an entry in the asasrv.ini
file for the remote server, dbisql would have connected to the remote server
even without specifying an engine name or links. This could have happened
on any dblib / ODBC connection that specified LocalOnly=YES. This has been
fixed.
================(Build #4087 - Engineering Case #299297)================
The database server could have reported a fatal error or assertion failure
due to a failed IO operation. The error message displayed may have been "Fatal
error: Unknown device error", "Fatal error: no error", "A write failed with
error code: (1453)", "A write failed with error code: (1450)", or some other
form of fatal error or assertion failure message.
Certain errors such as 1450 and 1453 can (apparently) be reported by NT
during "normal" operation. The server now retries IOs that fail with one
of these particular error codes up to 100 times. If the IO does not succeed
after 100 attempts, the server will now report the error as a fatal error
(same message as before). It is impossible to know whether 100 attempts will
give the OS sufficient time to remedy whatever problems it is having.
It is believed that the OS reports these errors when available memory is
low. A workaround may be to ensure that the system is not low on memory.
================(Build #4087 - Engineering Case #299615)================
The following sequence of statements would have resulted in a server crash:
create table table1 (col1 int, col2 int, col3 int );
create view v1 as
select col1, col2, col3 from table1;
alter view v1 (col1) as
select col1, col2, col3 from table1;
The ALTER VIEW statement included a column list for the view which did not
match the SELECT list. An error is now reported.
================(Build #4087 - Engineering Case #301968)================
A query with a merge join, meeting the following conditions, may have returned
too few rows:
1 - The join condition was an equality predicate (call this predicate P1).
2 - There was a non-equality predicate referring to columns from both tables
(call this predicate P2).
3 - At least one row from the left hand table joined to multiple rows from
the right hand table.
4 - P2 rejected at least one row that would have been in the result if P1
was the only predicate in the query.
5 - Merge join was chosen by the optimizer.
This has been corrected.
================(Build #4088 - Engineering Case #299223)================
A valid query containing a CASE expression in both the SELECT list and the
GROUP BY clause, may have failed with SQLCODE -149. For this to have occurred,
the CASE expression must omit the final ELSE clause. A simple reproducible
example is:
SELECT CASE dummy_col WHEN 0 THEN 'Q1' WHEN 1 THEN 'Q2' END from dummy
GROUP BY CASE dummy_col WHEN 0 THEN 'Q1' WHEN 1 THEN 'Q2' END;
This has been corrected.
================(Build #4088 - Engineering Case #302305)================
Performance improvements were done for several internal functions used by
the optimizer. These changes may reduce the OPEN time for very complex queries.
================(Build #4088 - Engineering Case #302314)================
For statements used in procedures, plans may be cached after a training
period (i.e., after a procedure has been called a few times). For statements
for which it is anticipated that the plans will not change too often, the
training period is now shorter, hence, the plans are saved much sooner than
before. Statements in this category include UPDATE/DELETE/SELECT statements
on one table with a WHERE equality predicate on the primary key.
================(Build #4088 - Engineering Case #302330)================
Selecting property('TempDir') when connected to a NetWare server would have
returned a random string on 8.0.1, or "SYS:" on 8.0.2 and up. On NetWare,
temp files for a database are located in the same directory as the database,
so the value of this property is meaningless. It has now been changed to
return an empty string.
================(Build #4088 - Engineering Case #302410)================
Batch statements executed outside procedures triggered the deletion of all
cached plans. For example, when the following script was executed, the plans
saved during the execution of the procedures 'proc1' and 'proc2' were all
deleted after the MESSAGE statement (a MESSAGE statement is considered
a batch). Now, only batch statements that change the plan cache trigger
the deletion of cached plans. In the example below, the plan cache is left
unchanged after the MESSAGE statement.
EXEC proc1;
EXEC proc2;
Message 'some message';
================(Build #4090 - Engineering Case #302457)================
If a query contained an IN predicate on a table which had an ORDER BY clause
on another column, the rows could have been returned in an order that did
not match the ORDER BY clause. For the rows to be mis-ordered, the following
conditions must hold:
- An index is selected on the table to satisfy the IN predicate
- The IN predicate appears in the index before the column in the ORDER BY
For example, the following query could exhibit the problem if an index on
T(a,b) was selected:
SELECT * FROM T WHERE T.a IN (1,2) AND T.b>3 ORDER BY T.b
This is now fixed.
================(Build #4090 - Engineering Case #302635)================
It was possible for an INSERT statement, using the ON EXISTING UPDATE clause,
to silently fail if the row it was attempting to modify was locked. This
is now fixed.
================(Build #4091 - Engineering Case #293306)================
The server will no longer attempt to update statistics during recovery and
when executing "simple" DELETE and UPDATE statements. Simple statements are
those that are not optimized and are executed directly by the server.
================(Build #4092 - Engineering Case #298906)================
Calling a Java in the Database object which created a zip file using 'java.util.zip.ZipOutputStream',
with either JDK 1.1.8 or 1.3, would have reported a bad CRC error. This has
now been fixed.
================(Build #4092 - Engineering Case #302216)================
The error message "The system DLL kernel32.dll was relocated in memory. The
application will not run properly. The relocation occurred because the DLL
c:\program files\sybase\SQL Anywhere 8\win32\dbserv8.dll occupied an address
range reserved for windows NT system DLLs. The vendor supplying the DLL should
be contacted for a new DLL.", could have been displayed when attempting to
start the server. This has been fixed.
This error message only seemed to occur after applying a specific Microsoft
Hotfix (Q299444).
================(Build #4092 - Engineering Case #302905)================
The server would have crashed if a null expression was passed into a string
parameter when calling an internal stored procedure. This is now fixed.
================(Build #4092 - Engineering Case #303048)================
A FOR statement in a stored procedure would have created string variables
for date, time or timestamp expressions in the SELECT list of the FOR query,
if the Return_date_time_as_string option was ON. Now, the variables are created
with the same data types as the expressions.
================(Build #4093 - Engineering Case #303294)================
The server collects statistics on predicate selectivities during query execution
and updates the existing column statistics accordingly. The server was missing
some opportunities to lower incorrect statistics when doing multi-column
index scans. This problem has now been fixed.
================(Build #4093 - Engineering Case #303389)================
When looking up the selectivity estimates for a predicate on values currently
out of range of the column histogram, the server could have used an inappropriate
value, such as 0%. Now, the server handles these cases of selectivity look-ups
and subsequent updates more appropriately.
================(Build #4093 - Engineering Case #303457)================
If a procedure, trigger, view or event handler contained a right brace character,
the original source format for the object was not saved. The object would
still have been created correctly and would function properly, but its appearance
would be altered when viewed from Sybase Central. This has been fixed. A
workaround is to remove the right brace character, if it appears only in
a comment, or replace it with \x7d or CHAR(0x7d) if it appears within a string.
================(Build #4094 - Engineering Case #303563)================
If the SUBSTR function was used to obtain a substring of a LONG VARCHAR column
and the third argument was not provided, a value of 32767 was used. The function
worked correctly if the first argument was a variable. Now a value of 2147483647
is used. For expressions such as variables, where the string length is known
when the expression is built, the actual string length will continue to be
used as the third argument.
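For example (the table and column names are hypothetical):
select substr( long_text_col, 100 ) from my_docs;
Previously, at most 32767 bytes starting at position 100 would have been
returned; now the remainder of the value is returned.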
================(Build #4094 - Engineering Case #303602)================
When attempting a BACKUP DATABASE operation on Windows CE, the process would
have failed with the error message:
IO Error 5 doing 'fstat' on file '<backup file name>'.
Ensure that all PC cards are inserted and that storage space is available.
Retry/Cancel.
This has been corrected.
================(Build #4095 - Engineering Case #298849)================
The server now allows one extra connection above the connection limit, to
allow a user with dba privileges to connect and drop other connections, in
case of an intentional or accidental denial-of-service attack. Note that
"connection limit" here refers to either the hard-coded 10-connection limit
of the personal server, or the value specified by the -gm switch, and has
nothing to do with licensing.
================(Build #4095 - Engineering Case #304034)================
Inserting into a proxy table would have caused a server crash, if the INSERT
statement used the ON EXISTING UPDATE clause. The ON EXISTING UPDATE clause
is not supported when inserting into proxy tables and will now cause a syntax
error.
================(Build #4096 - Engineering Case #304151)================
Executing an ALTER EVENT statement, to delete an event handler, was not setting
the "source" column in the SYSEVENT table to NULL. This is now fixed.
================(Build #4098 - Engineering Case #303370)================
An assertion failure such as:
102300: File associated with given page id is invalid or not open.
could have occurred when processing a query that skipped a string column
that appeared on a continuation page. A continuation page is used for a
row when a row does not fit entirely on one page; this can occur, for example,
if a column is added to an existing table. This has now been fixed.
================(Build #4098 - Engineering Case #304250)================
The STUFF function did not work properly on strings containing multibyte
characters. It now handles MBCS strings correctly.
================(Build #4098 - Engineering Case #304604)================
Queries containing FULL OUTER JOINs and at least five (5) quantifiers may
have suffered from a poor access plan. In particular, in such cases the ASA
optimizer would only have considered nested-loop full outer join (JNLFO)
for each FULL OUTER JOIN in the query.
This problem has been corrected.
================(Build #4099 - Engineering Case #300425)================
In order to make cost based decisions on which indexes to use, the optimizer
needs to know some physical properties or statistics of the various candidate
indexes. These statistics include the number of leaf pages in and the depth
of an index. The server was obtaining the approximate index statistics by
performing run time sampling of indexes which could have been an expensive
operation, especially when optimizing small queries with a large number of
potential indexes to choose from. The server will now maintain the required
statistics as each index is updated. Not only will the statistics now be
available to the optimizer at virtually no cost, the statistics will also
be accurate.
The new statistics will persist in SYSATTRIBUTE, in the form of one row for
each statistic of an index. The rows in SYSATTRIBUTE will only be created
when required, e.g., a row for index depth will be created only when the
index depth increases to 2. The statistics will be maintained for all indexes
including those on catalog tables. Also, the VALIDATE statement will now
verify that the statistics on the specified index(es) are accurate and will
generate an error otherwise.
================(Build #4108 - Engineering Case #303576)================
The MobiLink server memory usage would have increased over time as synchronizations
occurred, even if the synchronization did no processing. This has been corrected.
================(Build #4108 - Engineering Case #305678)================
A request to a remote server that is taking a long time can now be cancelled,
provided the remote class is ODBC based. If the cancellation is successful,
control will be returned to the client with an appropriate error message;
if, on the other hand, the request cannot be cancelled, then the engine will
continue to wait until the request completes.
================(Build #4111 - Engineering Case #306021)================
An isolation level 3 scan could have returned null values unexpectedly. This
is now fixed.
================(Build #4114 - Engineering Case #306125)================
With ANSI_INTEGER_OVERFLOW set to 'on', select 470488670*16 would have returned
-1062115872 on Unix platforms. This has been corrected so that the behaviour
is to return an overflow error.
================(Build #4118 - Engineering Case #303401)================
A server crash could have occurred if the following conditions were true:
- optimization logging was enabled
- the option LOG_DETAILED_PLANS was set to ON
- the query being logged was a complex statement
- the optimization strategy for the query was *not* being logged through
the use of one of the PLAN() functions.
This is now fixed.
================(Build #4119 - Engineering Case #305412)================
If a blob column was updated using OLE DB (e.g. through ADO cursors), the
ASA provider could have crashed. This is now fixed.
================(Build #4120 - Engineering Case #306963)================
A query in which an ORDER BY DESC was satisfied with a trie-based index could
have returned no rows, instead of the intended results. This could also
have happened with an ORDER BY ASC, if it was satisfied with a descending
index. This has now been fixed.
================(Build #4122 - Engineering Case #307157)================
If a join predicate is of the form "L.X = R.X" where L.X is a unique column,
and R.X is not a foreign key column, then the estimated selectivity for the
predicate "L.X = R.X" is now computed based on the number of distinct values
of the column R.X. For this estimate to be close to the real selectivity,
R.X must have an up-to-date histogram, or an index on <R.X> must exist. Note
that for an index to be useful, the database must have been created with an
8.0.2 or later server.
================(Build #4123 - Engineering Case #306536)================
If a query contained a grouped subquery in the HAVING clause, and the grouped
subquery referenced an aggregate function from the main query, then an incorrect
result set may have been returned. This is now fixed.
The example below illustrates this issue: the subquery is a grouped query
and the aggregate function "min(p.quantity)" is an aggregation that must
be computed in the main query block:
select id
from product p
where quantity < 28
group by id
having ( select COUNT(*)
from product p1
where 28 >= min(p.quantity) ) > 0
================(Build #4123 - Engineering Case #307143)================
If a view or derived table was defined with a constant column and the constant
column was equated with another constant in a query, the server may have
crashed. This has been fixed. The example below illustrates this case: the
column X is defined to be 0 in the derived table DT definition and it is
equated with the constant 2 in the predicate "DT.X = 2":
create table #temp1 (X int);
insert into #temp1
select * from
( select X=0 from
product p1, product p2
where p1.id = p2.quantity ) AS DT(X)
where DT.X = 2
;
================(Build #4126 - Engineering Case #310658)================
If a database server was started as an NT service, using the LocalSystem
account, non-administrative users would still have been able to kill the
database server process using the task manager or a command-line based kill
program. With this fix, non-administrative users no longer have the ability
to kill the database server process.
================(Build #4126 - Engineering Case #310709)================
Queries containing ANY or ALL subqueries may have returned errors (such as
SQLE_CANNOT_OPTIMIZE_QUERY) if the ANY/ALL subquery contained aliases on
subselects.
For example:
select (select count(*) from R) as A
from T
where 1 <> ALL (select A from S )
This has now been corrected.
================(Build #4127 - Engineering Case #311274)================
When using a database with a Japanese collation, characters with a second byte
of \x7d would have caused a syntax error (ASA error -131) if they appeared
in object names in CREATE PROCEDURE/FUNCTION/TRIGGER statements. This problem
has been fixed.
================(Build #4200 - Engineering Case #304918)================
Attempting to call a Java method immediately after upgrading a database to
add Java support, caused a misleading error message "not a public Java class."
The documentation of the ALTER command states "If you add Java in the database,
you must restart the database before it can be used.", but this could be missed
while working through the "Invoice sample". The message has been changed
to now say "The database needs to be restarted for this Java related command."
================(Build #4201 - Engineering Case #304960)================
When making a connection to an ASA remote server, via ODBC, the Remote Data
Access layer now names the remote connection ASACIS_? where "?" gets replaced
with the connection id of the local connection. This feature is useful if
a customer needs to drop the remote connection in order to cancel a remote
request.
================(Build #4201 - Engineering Case #304975)================
When attempting to create a proxy table to a MS SQL Server table that had
a uniqueidentifier column, the Remote Data Access layer would fail with an
unsupported datatype error. As of this change, the proxy table now successfully
gets created with the uniqueidentifier column being mapped to a local column
with a user-defined data type of uniqueidentifierstr, whose base type is
char(36). Hence, querying the uniqueidentifierstr column will force the SQL
Server ODBC driver to convert the uniqueidentifier column to a string. Users
can then use strtouuid to map the uniqueidentifierstr to a uniqueidentifier.
The sa_migrate scripts have also been modified such that migrating a SQL
Server table with a uniqueidentifier column will result in creating a base
table that also has a uniqueidentifier column. The migrate scripts will handle
converting the uniqueidentifierstr to a uniqueidentifier prior to inserting
the value into the base table.
================(Build #4202 - Engineering Case #303278)================
An INSERT ... SELECT statement, which contained a UNION involving a non-existent
user-defined function, would have caused a server crash when executed. This
problem has now been fixed and a "procedure not found" error message will
now be returned.
================(Build #4203 - Engineering Case #305313)================
The database server could report a "Fatal error: database error" when a sequential
scan was being performed on a temporary table using group reads (which requires
a database initialized with 8.0.0 or later and the table must be 'large').
The problem would have occurred very infrequently and would only have occurred
if the temporary table pages were located near the end of the temporary file.
This problem has now been fixed.
================(Build #4204 - Engineering Case #300081)================
Executing a CREATE SERVER statement with an unsupported class on NetWare
would have succeeded, but executing a CREATE EXISTING TABLE on that server
would have caused the NetWare server to abend. This has now been fixed. Note
that 'asajdbc' is the only supported class on NetWare.
================(Build #4204 - Engineering Case #305902)================
It was possible that any of the following commands could have written the
OPTION clause incorrectly to the database:
CREATE SYNCHRONIZATION USER, ALTER SYNCHRONIZATION USER, CREATE
SYNCHRONIZATION SUBSCRIPTION, ALTER SYNCHRONIZATION SUBSCRIPTION, CREATE
SYNCHRONIZATION SITE, ALTER SYNCHRONIZATION SITE, CREATE SYNCHRONIZATION
DEFINITION, ALTER SYNCHRONIZATION DEFINITION, CREATE SYNCHRONIZATION
TEMPLATE, ALTER SYNCHRONIZATION TEMPLATE.
This has now been fixed.
================(Build #4206 - Engineering Case #302606)================
Support for the Borland compiler was missing in 8.0.x releases. In particular:
- compiling dbtools.h failed
- the dbtools and dblib import libraries (dbtlstb.lib and dblibtb.lib) were
missing
- samples\asa\c\makeall.bat failed to build using the Borland compiler.
This has been fixed. Note that due to a limitation of newer Borland linkers,
Borland compiler built Embedded SQL applications must now compile and link
src\sqlcadat.c (in the Adaptive Server Anywhere install directory) into each
Embedded SQL application. This is in addition to linking against dblibtb.lib.
================(Build #4206 - Engineering Case #305913)================
If the following procedure was created using Sybase Central, a syntax error
would result when trying to rebuild the database:
create procedure mytest() as
select 1
go
Note that the procedure ends with a SELECT statement having no FROM clause.
The server treats the "go" as an alias for the last SELECT list item; otherwise,
a syntax error would be given when trying to save the procedure definition
in Sybase Central. The "go" is included in the preserved-source string and
causes a syntax error on rebuild. This problem is similar to issue 302757.
The trailing "go" is now removed when saving the procedure definition.
================(Build #4208 - Engineering Case #303630)================
On Windows platforms other than CE, if no environment variable was set to
define the location for ASA temporary files, a database could not have been
started. The temporary file will now be created in the directory where the
server was started.
================(Build #4208 - Engineering Case #306763)================
If the TCPIP parameter MyIP was not a valid IP address or the keyword NONE
(e.g. "-x tcpip(MyIP=mypcname)"), a value of 255.255.255.255 was used
instead. This has been fixed; only IP addresses (in standard "dot notation")
or the word "none" are allowed. Anything else will give an error on startup.
================(Build #4208 - Engineering Case #306988)================
Starting a server with both the -x tcpip(DoBroadcast=NO) and -sb 0 switches
would have caused it to crash. This has been fixed.
================(Build #4209 - Engineering Case #303384)================
If a remote table A had two separate foreign key columns c1 and c2, both
referencing table B, the sa_migrate system procedure would have attempted
to import both foreign keys as a single foreign key relationship, i.e. (c1,
c2) references B(p1,p1), resulting in the error "foreign key p1 already
referenced". This problem has now been fixed.
================(Build #4209 - Engineering Case #303402)================
The sa_migrate system procedure would have migrated indexes incorrectly.
For example, given a table T with columns c1,c2,c3,c4,c5, with index i1 on
column c1 and index i2 on columns c2 and c5, the generated indexes for the
local table would have been two separate but identical indexes: i1 on columns
c1,c2 and c5, and i2 on columns c1,c2 and c5. This problem has now been corrected.
================(Build #4209 - Engineering Case #306324)================
Attempting to create an event with a "WAIT AFTER END" clause would have crashed
the server.
For example:
create event MyEvent
schedule MySchedule start time '10:00PM' on ('Thu','Fri')
handler
begin
backup database directory 'd:\\backup'
wait after end
transaction log truncate
end
This has now been fixed.
================(Build #4209 - Engineering Case #307415)================
When using the Remote Data Access feature to attempt to create a proxy table
to a Microsoft Access database, the server may have failed with a "table
not found" message. This problem occurred when the file path of the Access
database was longer than 63 characters. The problem has now been fixed.
================(Build #4211 - Engineering Case #308011)================
The server could have crashed after warning of a fatal error, (most likely
due to an out of disk condition). This has been fixed.
================(Build #4212 - Engineering Case #306063)================
Inserting a row could have caused assertion 200601. This would only have
occurred when doing an insert using the 'ON EXISTING UPDATE' clause. The row
being inserted must not have been in the table (i.e. no update happened). The
table into which the row was being inserted also needed to have been involved
in replication for the assertion to occur. The resulting database file was
not corrupt. This has been fixed.
================(Build #4212 - Engineering Case #306467)================
Calling sa_get_eng_properties on an operating system platform with a large
amount of memory would have displayed properties like "MainHeapBytes" as
a negative number. This can also be demonstrated by calling the property()
function directly.
Example:
select property( 'MainHeapBytes' );
This has been fixed.
================(Build #4212 - Engineering Case #306580)================
Queries with LIKE predicates containing a NULL escape character were being
evaluated as if there was no escape character. Now LIKE predicates containing
a NULL escape character evaluate to NULL. This new behaviour matches the
ANSI standard.
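A sketch of the new behaviour, using a variable as the escape character (this
assumes a variable is accepted in the ESCAPE clause; the employee table is
from the ASA sample database):
create variable esc char(1);
set esc = null;
select count(*) from employee where emp_lname like 'S%' escape esc;
The LIKE predicate now evaluates to NULL, so no rows qualify.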
================(Build #4212 - Engineering Case #308297)================
If a forward-only, read-only cursor was opened with a query that was optimized
to use a merge join, then the error:
-187 "Illegal cursor operation attempt" 09W02
could have been returned. For example, this error would be returned when
re-fetching the current row. This has now been fixed.
================(Build #4212 - Engineering Case #308323)================
If a SQL stored procedure was defined as an external java procedure and the
access modifier for the java method was private, calling the stored procedure
would have executed the method instead of giving an error. Now, the server
will report a PROCEDURE_NOT_FOUND error when such a procedure is called.
================(Build #4212 - Engineering Case #308330)================
Attempting to connect, using jConnect, to a UTF8 database using a userid
which contained non-ASCII characters, would have failed. This has now been
fixed.
================(Build #4212 - Engineering Case #308440)================
Running 'TRUNCATE TABLE' on an already corrupted table could have resulted
in other database objects also becoming corrupted; this would only have happened
in extremely rare circumstances. 'VALIDATE TABLE' would have found the
corruption both before and after the 'TRUNCATE TABLE' command. Assertion
200607 has now been added to catch this case.
================(Build #4213 - Engineering Case #308632)================
Adaptive Server Anywhere allows one index on a table to be specified as clustered.
This specification can now be changed via the ALTER INDEX statement without
the need to recreate the index. Note that this statement only changes the
specification of the index but does not reorganize the data. So, if the clustered
index on a table is changed, then depending upon the key composition for
the old and the new clustered index, the data may no longer be clustered
on the new clustered index. Remember that the clustering of an index in ASA
is only a hint to the server and the clustering of data is not guaranteed.
However, if so desired, clustering can be restored by using the REORGANIZE
TABLE statement.
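A sketch of changing the clustered index designation and then restoring
physical clustering (the index and table names are hypothetical, and the
exact clause spellings are assumptions based on the description above):
ALTER INDEX ix_order_date ON DBA.sales_order CLUSTERED;
REORGANIZE TABLE DBA.sales_order;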
================(Build #4214 - Engineering Case #295384)================
Indexes and foreign keys explicitly created by a user can now be renamed
with an ALTER INDEX statement:
ALTER INDEX indexname rename-spec
ALTER [INDEX] FOREIGN KEY rolename rename-spec
rename-spec:
ON [ owner.]tablename RENAME [ AS | TO ] newname
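For example (the index, role and table names are hypothetical):
ALTER INDEX ix_emp_name ON DBA.employee RENAME AS ix_employee_name;
ALTER INDEX FOREIGN KEY fk_dept ON DBA.employee RENAME TO fk_department;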
================(Build #4214 - Engineering Case #308663)================
If the option wait_for_commit was on, it was possible to commit a transaction
that had inserted a foreign key with no matching primary key. For this to
have happened, all of the following conditions must hold:
- the foreign key in question must have been larger than any existing primary
key,
- another transaction (also with the wait_for_commit option on) must have
inserted the same foreign key before the first transaction committed, and
must have done so while the foreign key was greater than any existing primary
key,
- between the two foreign key insertions, enough primary keys must have
been added to fill the last leaf page and at least one of these keys must
be subsequently deleted (before the second foreign key insertion).
- the index used to enforce the constraint must have been an uncombined (split)
comparison-based index.
This has now been fixed.
================(Build #4214 - Engineering Case #308770)================
If the length of the view's name in a CREATE VIEW statement exceeded 119
bytes, the server would have crashed. Now names of up to 128 bytes are allowed.
If the size of the name is over 128 bytes, the error 'Identifier ... too
long' will be reported.
================(Build #4214 - Engineering Case #308844)================
The dbtsinfo utility would have always displayed NULL for a database property
called PatriciaTrees. This property had been replaced by one called CompressedBTrees,
but dbtsinfo continued to refer to the old name. This is now fixed.
================(Build #4214 - Engineering Case #309015)================
When an event is executed, it runs on its own connection. If an event connection
was dropped (either from Sybase Central or using the DROP CONNECTION statement),
the server would have crashed, if it had been started with the -z switch.
This has now been fixed.
================(Build #4215 - Engineering Case #309296)================
In the presence of concurrent DDL or LOCK TABLE operations, it was possible
to commit a transaction that had inserted a foreign key with no matching
primary key. For this to have occurred, all of the following conditions
must have been true:
- the committing transaction must have deleted a primary row for which there
were foreign references
- another transaction must have had the foreign table's schema locked exclusively
(via DDL or LOCK TABLE)
- the DDL or LOCK TABLE operation must have been the first use of the foreign
table since it was loaded
- the committing transaction must have had blocking off (or possibly be
involved in a deadlock)
- the index used to enforce the constraint must have been an uncombined
(split) index.
This problem, which could also have resulted in a server crash, is now fixed.
================(Build #4215 - Engineering Case #315471)================
A new built-in function has been added with the following syntax:
DB_EXTENDED_PROPERTY ( { property_id | property_name }
[, property-specific-argument
[, { database_id | database_name } ] ] )
This new function, db_extended_property(), is similar to db_property(),
but it allows an optional property-specific string parameter to be specified.
The interpretation of the property-specific argument depends on the property
id or name specified in the first argument. Calling db_extended_property(
x ) is equivalent to calling db_property( x ).
Two new properties have been added: FileSize and FreePages. Each of these
properties can take an optional argument which specifies the dbspace for
which the property is being requested.
The dbspace can be specified as any of the following:
- the name of the dbspace
- the file_id of the dbspace
- 'translog' to refer to the transaction log file
- 'temp' to refer to the temp file
- 'writefile' to refer to the write file
If the dbspace does not exist, the property function will return null. If
the name of a dbspace is specified and an id or name of a database which
is not the database of the current connection is also specified, the function
will also return null (since it is not possible to query the system tables
across databases) except that the well-known name 'system' will still be
accepted.
FileSize returns the length of the specified database file in pages. For
the system dbspace on databases created with 8.0.0 or later, FileSize includes
the size of the checkpoint log which is located at the end of the database
file.
FreePages returns the number of free pages in the specified file. FreePages
is only supported on databases created with 8.0.0 or later. For the transaction
log, FreePages returns the number of completely empty, unused pages between
the current logical end of the log and the physical end of the log file.
When using a write file, FreePages on a dbspace will return the number of
free pages in the virtual dbspace represented by writefile. FreePages on
the writefile itself returns the number of pages free in the writefile that
can be used to create new mapped versions of database pages. That is, pages
may appear free to a dbspace but, from the point of view of the write file
itself, they are actually allocated and in use inside the write file as images
of pages.
If no property-specific argument is provided for either of these properties,
the system dbspace is assumed.
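For example (using the dbspace designators listed above; omitting the argument
assumes the system dbspace):
select db_extended_property( 'FileSize' ),
       db_extended_property( 'FileSize', 'translog' ),
       db_extended_property( 'FreePages', 'temp' );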
================(Build #4216 - Engineering Case #308855)================
When connecting to an authenticated ASA remote server using Remote Data Access,
the connection would not have been auto-authenticated. As a result, the connection
would have become read-only after the 30-second timeout. This problem has now
been resolved and all connections to ASA remotes are now auto-authenticated.
Note that the Remote Data Access class must be either asaodbc or asajdbc.
================(Build #4216 - Engineering Case #308965)================
A new option, OPTIMISTIC_WAIT_FOR_COMMIT, has been added to aid in migrating
5.x applications to 8.x. By default, OPTIMISTIC_WAIT_FOR_COMMIT is 'off'.
When it is set to 'on', the locking behaviour when WAIT_FOR_COMMIT is 'on'
is changed as follows:
- no locks are placed on primary key rows when adding an orphan (fk row
with no matching pk row)
- if a transaction adds a primary row, it will be allowed to commit only
if the transaction has exclusive locks on all foreign rows that reference
the primary row at the time the primary row is added.
This option is meant to mimic 5.x locking behaviour when transactions add
foreign rows before primary rows (with the proviso that no two transactions
concurrently add foreign rows with the same key value). Note that this option
is not recommended for general use as there are a number of scenarios in
which transactions will (counterintuitively) not be allowed to commit, including:
- if transactions concurrently add foreign rows with the same key value
when the primary row does not exist, at most 1 of the transactions will be
allowed to commit (the one that adds the corresponding primary row).
- if a transaction deletes a primary row and then adds it back, it likely
will not be allowed to commit (unless it has somehow obtained exclusive locks
on all of the matching foreign rows).
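For illustration only, a sketch of enabling the 5.x-style behaviour (note
that the new option only changes locking while WAIT_FOR_COMMIT is 'on'):
SET OPTION PUBLIC.WAIT_FOR_COMMIT = 'on';
SET OPTION PUBLIC.OPTIMISTIC_WAIT_FOR_COMMIT = 'on';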
================(Build #4217 - Engineering Case #309658)================
Database validation was not detecting entries missing from a primary key
index when checking a foreign key. As a result, it was possible to get referential
integrity violations while rebuilding a database or index, even though validation
had reported the database as fine. The validation now occurs as documented.
That is, for foreign key indexes, validation ensures that the corresponding
row exists in the primary table. As a result of this change, database validation
may now take longer.
================(Build #4218 - Engineering Case #309611)================
If a column having a default value was modified using Sybase Central to remove
the default, the catalog change would have been made immediately, but inserts
into the table would continue to use the default until the database was restarted.
This has now been fixed.
================(Build #4218 - Engineering Case #309653)================
Executing an UPDATE WHERE CURRENT OF cursor statement in a batch or stored
procedure would have caused the server to hang, if the cursor was not updateable
and ansi_update_constraints was on. This is now fixed.
================(Build #4218 - Engineering Case #309847)================
An expression such as "x LIKE y" would have caused the server to crash if
'y' evaluated to a string longer than the database page size (approximately).
Now, the server will report the error SQLSTATE_PATTERN_TOO_LONG if 'y' is
longer than one page.
================(Build #4221 - Engineering Case #310638)================
A table containing an indexed numeric column could have become corrupt if
the value of the precision option was changed to a value smaller than that
of some of the values that existed in the table, and rows containing such
values were subsequently modified. This has been fixed.
================(Build #4222 - Engineering Case #310784)================
The persistent index statistics could have been incorrect after an index
reorganization. This problem only affected trie-based indexes and has now
been fixed.
================(Build #4223 - Engineering Case #308177)================
In rare cases, it was possible that the server could have crashed when a
TCP or SPX connection was being dropped due to a liveness or idle timeout,
or through use of the DROP CONNECTION statement. This would only have happened
on Win32 platforms and has now been fixed.
================(Build #4223 - Engineering Case #310932)================
Queries containing the NUMBER(*) function and an EXISTS predicate could have
returned incorrect values for the NUMBER(*) function, if the EXISTS sub-query
was flattened. The incorrect values would have been zero for each row. After
this fix, the correct row number is returned.
================(Build #4224 - Engineering Case #310279)================
In certain circumstances, if a sort operation ran out of memory, rows could
have been incorrectly omitted from the result of a query. In the case where
rows were omitted, the QueryLowMemoryStrategy property would be incremented.
This has been fixed.
================(Build #4225 - Engineering Case #311099)================
When describing a read only cursor result set of a stored procedure or batch
which has references to tables contained in publication articles, the error
"QOG_BUILDEXPR: could not build EXISTS" could have been reported. THis is
now fixed.
================(Build #4226 - Engineering Case #310881)================
When an ALTER TABLE statement added a column with a computed or default
value, row locks were obtained which, if the table was large, could have caused
the operation to fail. The row locks were redundant, since an exclusive schema
lock was already held. The row locks are no longer obtained.
================(Build #4226 - Engineering Case #311851)================
Scrolling through a trie-based index to before the first row and then scrolling
forwards, could have resulted in the scan terminating prematurely. This is
now fixed.
================(Build #4228 - Engineering Case #312221)================
A permissions problem with derived tables has been fixed.
================(Build #4228 - Engineering Case #312241)================
A PRINT statement could have displayed an unreadable message in the server
console or log file if the database's character set (for example utf8) was
different from the OS's character set (for example Japanese CP932). This
has been fixed.
================(Build #4229 - Engineering Case #311850)================
When creating a stored procedure which returned a result set and the result
set datatypes were not explicitly specified, if one of the result set columns
was NUMERIC and the precision of the column was calculated as greater than
128, a server crash would have resulted. This has been fixed. A workaround
is to explicitly specify the result set datatypes.
================(Build #4229 - Engineering Case #312098)================
The server could have attempted to update the wrong database page. If two
or more transactions were concurrently reading a page, and one transaction
updated the page, there was a very small, but non-zero, chance
that it might have updated the wrong page. In many cases this would have
resulted in an assertion indicating a page number or page type mismatch.
This would likely only have occurred on Unix and multiprocessor (hyperthreaded)
NT systems. This is now fixed.
================(Build #4233 - Engineering Case #301210)================
If a procedure was no longer valid (e.g. the syntax in the procedure was no
longer supported), then preparing the procedure for execution via ODBC would
have caused the server to crash. This has been fixed, and the error message
'procedure is no longer valid' will now be returned to the application.
================(Build #4234 - Engineering Case #314168)================
Assertions 100307 or 101412 could have occurred with encrypted databases
when reading pages from the temporary file or a dbspace. This has been fixed.
================(Build #4234 - Engineering Case #314279)================
When sequentially scanning a large table, there was the possibility of cache
corruption when the scan completed. This was unlikely to be observed on
single processor NT/W2K/XP platforms, and was likely to be rare in any case.
It is now fixed.
================(Build #4236 - Engineering Case #310700)================
When purging locks from the lock table, the lock table entries could have
become misordered. It was unlikely that this would have caused problems
other than assertions 200300, 200301 or 200302. For this problem to have
occurred, there must have been active keyset cursors and the database either
must have had more than 1 dbspace, or more than 1,000,000 pages in the system
dbspace. This is now fixed.
================(Build #4236 - Engineering Case #311195)================
In versions of ASA earlier than 8.0.0, assertions 101201, 104802 and 104803
could have occurred while defragmenting a database log file, if it was in
use by a server running with the -m command line option to truncate the log
file at a checkpoint. In versions 8.0.0 and later, assertions 100908, 100909
and 104802 could have occurred in this same situation.
When running with the -m option, the log file is truncated at each checkpoint,
but this truncation could not have occurred if the file was being defragmented
concurrently. This has now been fixed.
Note: Use of the -m switch is not advised; please read the documentation
before using this server command line switch. To avoid database file fragmentation,
it is recommended that where this option is used, the transaction log be
placed on a separate device or partition from the database itself.
It is also not recommended to defragment any database files while they are
in use.
================(Build #4236 - Engineering Case #312388)================
Servers running on Win32 multi-processor systems could have hung, crashed
or failed with an assertion, due to a bug which allowed two threads to be
active simultaneously in a critical section. This issue was unlikely to
have appeared on single processor systems. It has now been fixed.
================(Build #4237 - Engineering Case #240450)================
If a user was using a view or executing a procedure owned by a different
user, and then the permission to use the view or procedure was revoked, the
user could still have used the view or procedure until the server was restarted.
This has been fixed - the revocation now takes place immediately.
================(Build #4237 - Engineering Case #311055)================
If a computed column referenced another column which appeared in a CHECK
constraint, the computed column could have been incorrectly set to NULL.
For example, the following table definition would have caused the problem:
create table T(
x integer check( 2*x >= 0 ),
y integer COMPUTE (2+x)
)
An UPDATE statement that modified 'x' (to a non-NULL value) would have left
'y' set incorrectly to NULL. This problem has now been fixed.
================(Build #4237 - Engineering Case #313209)================
For very complex WHERE clauses in disjunctive form, new IN predicates are
now generated that can be used as sargable predicates. For an IN predicate
of the form "T.X IN ( constant_1, constant_2, ...)" to be generated, it
is necessary to have in each term of the disjunction, a predicate of the
form "T.X = constant_i". In the example below, the query Q1 is transformed
into the query Q2 where two new sargable IN predicates were generated.
Example:
Q1:
select *
from T
where (T.X = c1 and T.Y = c2) or
      (T.X = c3 and T.Y = c4) or
      (T.X = c5 and T.Y = c6) or
      (T.X = c7 and T.Y = c8) or
      (T.X = c9 and T.Y = c10) or
      (T.X = c11 and T.Y = c12)
Q2:
select *
from T
where T.X IN ( c1, c3, c5, c7, c9, c11 ) and T.Y IN ( c2, c4, c6, c8, c10, c12 )
and ( (T.X = c1 and T.Y = c2) or
      (T.X = c3 and T.Y = c4) or
      (T.X = c5 and T.Y = c6) or
      (T.X = c7 and T.Y = c8) or
      (T.X = c9 and T.Y = c10) or
      (T.X = c11 and T.Y = c12) )
================(Build #4237 - Engineering Case #314830)================
Passing large blobs to external functions was slow, as an "optimization"
for quickly reading blobs was not being used. There was also a possibility
of incorrect results when accessing blobs piecewise. It was possible that
if the first fetch was for exactly 255 bytes, the second fetch would have
returned the first n bytes again, not the next n bytes (following the first
255). These two problems have now been fixed.
================(Build #4239 - Engineering Case #314828)================
Error strings returned from the server through a TDS connection could have
had garbage characters preceding the message. This has been fixed.
================(Build #4239 - Engineering Case #315386)================
In certain rare situations, the database server could have crashed while
running dbunload with the -ar, -an or -ac command line options. This problem
has now been fixed.
================(Build #4240 - Engineering Case #315543)================
If a client's character set did not match the database's character set, the
embedded SQL GET DATA statement could have returned incorrect results. This
is now fixed.
================(Build #4240 - Engineering Case #315556)================
Complex queries with sargable IN predicates may have had a less than optimal
plan, due to the optimizer underestimating the size of intermediate result
sets.
For example, if a table T has a primary key on the columns (T.A, T.B) and,
in a complex query, the predicates "T.A = constant" and "T.B IN ( constant1,
constant2, ..)" are used to create a partial index scan on (T.A, T.B), the
number of rows returned by the partial index scan may have been underestimated.
This has been corrected so that the optimizer now calculates a reasonable
estimate.
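For illustration only, a hypothetical query of the affected shape, where
the partial index scan on the primary key (T.A, T.B) was previously costed
too low:
select * from T where T.A = 10 and T.B in ( 1, 2, 3 );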
================(Build #4241 - Engineering Case #308876)================
If CHAINED was set to off (as is the default with TDS connections) and a
statement was closed, not all locks were released, even though they should
have been. This problem has now been fixed.
================(Build #4241 - Engineering Case #312222)================
If a table in the FROM clause of an EXISTS or IN subquery had a local predicate
of the form "T.X = T.Y", the query may have returned an incorrect result
set. For this to have happened, T.X or T.Y must be in the select list or
be used in another predicate of the form "T.X = outer reference column".
For example:
select *
from R, S
where R.X IN ( select T.X
from T
where T.X = T.Y )
and S.Z = R.Z
================(Build #4241 - Engineering Case #313305)================
When run on Solaris, the server could have crashed on a CREATE PROCEDURE
statement if it included a statement like UPDATE ... SET ... FROM, where
the FROM clause had more than 4 tables. This has been fixed.
================(Build #4242 - Engineering Case #294269)================
If the PCTFREE value for a global temporary table was changed via the ALTER
TABLE statement, the changed value did not persist and was lost at the end
of the connection. This has been fixed and the changes will now persist.
================(Build #4242 - Engineering Case #316016)================
If a query in a stored procedure contained a builtin function that was used
either with no parameters or only constant parameters, then the function
might only have been evaluated periodically, not every time that the query
was executed. This was a result of plan caching, which cached the results
of evaluated functions. For deterministic builtin functions, the caching
did not change behaviour as the same answer would be returned for every execution
of the function. However, for some functions such as rand() or connection_property(),
the value of the function could be different even though the identical parameters
were supplied. The caching logic has been changed so that functions which
might return different values for the same inputs are now no longer cached.
When using a reusable cursor, expressions over these builtin functions may
now be slightly less efficient.
================(Build #4242 - Engineering Case #316073)================
Histogram updates during query processing could, in some rare and as yet
undetermined circumstances, cause some selectivity estimates to become invalid.
This in turn could cause inefficient query plans to be chosen. The problem
can be detected by looking at the selectivity estimates in query plans; invalid
estimates can show up as values such as -1.#IND. The server will now clean
up these invalid estimates when loading histograms.
================(Build #4242 - Engineering Case #316106)================
If the Java VM failed on server startup, the server might then have crashed
when shut down. This has now been fixed.
================(Build #4243 - Engineering Case #306639)================
Executing a MESSAGE statement while in passthrough mode using DBISQL would
have caused a server crash, if the DBISQL connection was made using jConnect.
An error (-707 Statement is not allowed in passthrough mode) is now reported.
================(Build #4243 - Engineering Case #316408)================
Executing a CREATE DATABASE statement while the server was under a heavy
load, could have resulted in the server hanging. This has been fixed.
================(Build #4244 - Engineering Case #314725)================
If a procedure generated a result set using a SELECT on a view, and the view
was on a table that was part of a SQL Remote publication, and the first call
to the procedure was made using a read-only cursor, subsequent calls to the
procedure could have resulted in a "column not found" error. This error referred
to columns in the SUBSCRIBE BY part of the publication definition. Now, subsequent
calls to the procedure will treat the SELECT as read-only.
================(Build #4244 - Engineering Case #316471)================
If an error occurred while transferring data between the old and new databases
using one of the dbunload command line switches -ac, -an or -ar, dbunload
could have hung and the server would have used 100% of one CPU. Now dbunload
will report the appropriate error message(s).
================(Build #4245 - Engineering Case #316231)================
If a procedure definition contains a percent sign, '%', it is treated either
as the modulo operator or a comment delimiter, based on the setting of the
Percent_as_comment option. If this option was set to OFF before such a procedure
was created, the percent sign would have been stored in the catalog; otherwise,
it would have been changed to the double dash comment delimiter. If a
procedure definition stored in the catalog contained a percent sign and the
first user to call the procedure had Percent_as_comment set to ON, the procedure
may have failed to load or may have performed incorrectly. Also, rebuilding
the database may have failed, as the percent sign would have been treated
as a comment delimiter and could have resulted in a syntax error. This has
been fixed. Existing procedures containing a percent sign should be re-created
after this fix before attempting to rebuild the database. A workaround is
to use the remainder() function instead of using the percent sign as modulo
operator.
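For illustration only (table and column hypothetical), the workaround replaces
the operator with the equivalent function call:
select remainder( x, 3 ) from T;   -- instead of: select x % 3 from T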
================(Build #4245 - Engineering Case #316673)================
If a table is locked exclusively with LOCK TABLE ... IN EXCLUSIVE MODE, the
server will, by default, no longer acquire row locks for the table. This
can result in a significant performance improvement if extensive updates
are made to the table in a single transaction, especially if the table is
large relative to cache size. It also allows for atomic update operations
that are larger than the lock table can currently handle (approx. 2 - 4 million
rows).
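For illustration only, a sketch of a transaction that benefits from this
change (table and column names hypothetical):
LOCK TABLE big_table IN EXCLUSIVE MODE;
UPDATE big_table SET status = 0;   -- no individual row locks acquired
COMMIT;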
================(Build #4245 - Engineering Case #316688)================
When using the JDBC-ODBC bridge to select a variable of type java.math.BigDecimal,
the value would have been returned as a binary instead of as a string.
For example:
create variable d java.math.BigDecimal;
set d = new java.math.BigDecimal(1000);
select d;
This problem has now been fixed.
================(Build #4245 - Engineering Case #316870)================
If a transaction committed while it was in the middle of a page level backup,
database corruption could have resulted if a subsequent operation that required
a page level undo (i.e. ALTER TABLE or LOAD TABLE on a table with existing
data) failed and a checkpoint was done while the operation was in progress.
This is now fixed.
Note that this is not likely to affect most users, since the existing backup
tools do not exhibit this behaviour, but it may have occurred with a user
written backup tool using the page level backup api.
================(Build #4245 - Engineering Case #316904)================
Attempting to create a proxy table to a remote table with a primary key column
which was a keyword would have failed. For example, attempting to create
a proxy table to a table that had a primary key column named "time" would
have failed. This problem has now been fixed.
================(Build #4245 - Engineering Case #316905)================
When using sa_migrate() to migrate a set of tables owned by a particular
user, the migration scripts would also migrate any views, global temporary
tables and system tables that were also owned by that user. The migration
scripts now migrate only base tables.
================(Build #4246 - Engineering Case #316976)================
An expression which never returns NULL, even when its arguments are NULL,
could have caused an assertion failure when used in a query containing an
outer join, if hash join was selected as the execution method.
For example, the following query would have caused the problem to occur:
select e.emp_id ee, isnull( d.dept_head_id, 999 ) de
from employee e left outer join department d on ee=de
order by emp_fname
Errors included "Unknown Device Error", although other fatal errors or assertion
failures were possible. This has now been fixed.
================(Build #4247 - Engineering Case #315593)================
If a column was defined as NUMERIC and the GLOBAL AUTOINCREMENT value generated
for an inserted row would have been greater than 2**31, the value assigned
to the column and the value returned for @@identity were incorrect. This
has been fixed. Note that the use of BIGINT is recommended in this situation
for improved efficiency.
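For illustration only, a sketch of the recommended column definition (table
and column names hypothetical):
create table T (
    id bigint not null default global autoincrement,
    primary key( id )
);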
================(Build #4247 - Engineering Case #316104)================
If an event handler was running when an attempt was made to upgrade a database,
the upgrade would have failed to start, since another connection was active.
Now the upgrade will wait for the event handler to complete before proceeding.
================(Build #4248 - Engineering Case #317464)================
When running the server with the -q switch, if an error occurred during startup,
a message box containing the error would still have appeared. Now, the error
dialog will not be displayed and the engine will silently fail to start.
However, if the server is running as a service, the error message will be
logged to the application event log.
================(Build #4248 - Engineering Case #317536)================
The COMMENT ON COLUMN statement could sometimes have set an incorrect comment
string. This would have been more likely to occur if the statement appeared
as part of a batch containing other strings. The comment will now be set
correctly.
================(Build #4249 - Engineering Case #315340)================
The server could have crashed while executing a remote procedure call that
took input parameters. This would only have occurred if the remote server
was using the ODBC server class. This has been fixed.
================(Build #4250 - Engineering Case #315474)================
When run on an AIX machine using IBM's Power4 processor, the server could
have hung or crashed. When the server hung, the CPU usage would have gone
to 100%. This problem would only have been seen on SMP machines.
The P/630, P/670, and P/690 machines are some of the IBM machines that currently
use the Power4 chip. This has now been fixed, but a workaround is to use
the bindprocessor command to bind the ASA server processes to one CPU.
================(Build #4250 - Engineering Case #317780)================
When using a keyset-driven (value sensitive) cursor, attempting to fetch
from the cursor after an error was reported on a previous fetch could have
caused the server to crash. This has been fixed; now a fetch on any cursor
type, after an error has been reported on a prior fetch, returns the error:
"Cursor not in a valid state", -853.
================(Build #4250 - Engineering Case #317965)================
The STR function could have returned invalid data or crashed the server,
if the numeric expression parameter was less than -1E126. This has been fixed.
================(Build #4252 - Engineering Case #318451)================
When using a multi-row (wide) insert, if the first inserted row had NULL
host-variables and also unspecified columns (defaulting to NULL), then the
second inserted row could insert incorrect values into the unspecified columns.
It was also possible for the server to write bad data to the transaction
log, resulting in a corrupt transaction log. Both problems have now been fixed.
================(Build #4252 - Engineering Case #318632)================
If fetching backwards through a cursor resulted in the cursor being positioned
before the start of the result set, fetching forwards again might have returned
error 100, "row not found", instead of the first row. This could only have
happened if the query used a comparison-based index scan which was bounded
below (typically indexed_value >/>=/= constant). This is now fixed.
================(Build #4253 - Engineering Case #317852)================
A stored procedure or function that concatenated integers as strings could
have caused a server crash.
For example:
declare x integer;
set x = 888;
set x = x || 888;
This is now fixed.
================(Build #4253 - Engineering Case #318803)================
Using the string concatenation operator ( || ) could have caused the server
to crash or return a runtime error. This would only have occurred if one
of the strings was a long varchar.
Code of the following form could have caused the problem:
declare x long varchar;
select a into x from test where id = 1;
set x = x || ', this should fail with an error';
As a workaround, use the following (it's actually more efficient):
declare x long varchar;
select a || ', this shouldn''t fail' into x from test where id = 1;
This is now fixed.
================(Build #4253 - Engineering Case #318840)================
The statement, ALTER DBSPACE ... ADD ... which grows a dbspace, would not
have executed if other connections to the server existed. The error 42W19
(-211) "Not allowed while '<username>' is using the database" would have
been reported. The statement is now allowed to execute when other connections
to the server exist.
================(Build #4254 - Engineering Case #317082)================
After an application connected to a local engine or server via shared memory,
the local application could not be killed using the 'End Task' action
in the task manager -- even if the user was an Administrator. The problem
was introduced by the change for QTS 310658 (Non-administrative user could
kill database server started as LocalSystem NT service). The problem was
reproduced on Windows 2000 Professional SP3 but could not be reproduced on
Windows 2000 Server SP3. This has now been corrected.
================(Build #4254 - Engineering Case #319085)================
If the first query referencing a table after a database was started contained
an error in the WHERE clause, subsequent queries would return the error:
Cannot find an index for field '<column name>'.
Every field must have at least one index. Current IndexCount = 0.
The database would have needed to be restarted to correct the problem. This
has been fixed.
================(Build #4255 - Engineering Case #319250)================
Calling the system function "property( 'PlatformVer' )" would have incorrectly
returned 'Windows .Net build ...' on Windows 2003. Now it returns 'Windows
2003 build ...'.
================(Build #4255 - Engineering Case #319428)================
The database server, when run on NetWare, would have eventually stopped accepting
SPX connections, and may have hung on shutdown. This was due to the server
running out of memory and has been fixed.
================(Build #4256 - Engineering Case #318875)================
A change in the syntax has been made to the built-in function:
DB_EXTENDED_PROPERTY ( { property_id | property_name }
[, property-specific_argument
[, { database_id | database_name } ] ] )
For the properties FileSize and FreePages, the property-specific_argument
'temp' has been changed to 'temporary', so that there is consistent usage
for the temporary file across different statements. The original value 'temp'
is still allowed for existing usage.
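For illustration only:
select db_extended_property( 'FreePages', 'temporary' );  -- preferred spelling
select db_extended_property( 'FreePages', 'temp' );       -- still accepted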
================(Build #4260 - Engineering Case #320615)================
Validate Index ensured that each row referenced in the index existed, but
it did not ensure that each referenced row could be found in the index by
value. Now each row is looked up by value. Note that Validate Table does
check for this.
================(Build #4260 - Engineering Case #320616)================
Reorganizing a comparison-based index with many entries (more than 3,000,000
in a database with a 2k page size) would have corrupted it. This is now fixed.
================(Build #4261 - Engineering Case #321429)================
The 32-bit Windows support on Win64 for Itanium does not include Address
Windowing Extensions (AWE) and scattered reads. Attempting to create an AWE
cache in this environment would have failed with "Insufficient memory". Similarly,
if a scattered read were attempted in this environment, the IO would have
failed with the server reporting an assertion failure. Now, AWE caches are
not allowed in this environment and a conventional cache is used instead.
Scattered reads are simulated using a large contiguous buffer (as done on
other platforms).
Win64 for AMD64 supports both AWE and scattered reads for 32-bit executables.
Running the 32-bit engine on Win64 is not recommended. Running the native
64-bit executable is preferred.
================(Build #4264 - Engineering Case #315129)================
The server could have failed with assertion 101414 - "AWE mapping failed",
even when Address Windowing Extensions (the -cw option) were not being used.
This was unlikely to occur on uniprocessor systems, so forcing the engine
to run on a single processor might be a workaround. This is now fixed.
================(Build #4266 - Engineering Case #318985)================
Database corruption, revealed by assertion messages, could have been possible
when executing a "LOAD TABLE" statement that failed (eg. due to a duplicate
primary key). This is now fixed.
================(Build #4266 - Engineering Case #320043)================
Calling the CSCONVERT function with a string larger than the database page
size, would have crashed the server. CSCONVERT is called by some of the
external system functions like XP_SENDMAIL. As a result, a call to XP_SENDMAIL
with a message body larger than the size of a database page would likely
have crashed the server. This has been fixed.
================(Build #4267 - Engineering Case #320645)================
The server could have crashed when searching a trie-based index for long
strings not in the column's domain. Typically, for this to occur, the string
would have to have been longer than 255 characters. This has now been fixed.
================(Build #4268 - Engineering Case #321729)================
When running the Win32 server or client software on a Win64 platform, the
SPX port would not have been started. The SPX port is now disabled for 32-bit
software running on 64-bit platforms.
================(Build #4268 - Engineering Case #321743)================
If a database or transaction log file was NTFS-compressed, the file fragment
count could have been incorrect. NTFS-compressed files are now handled correctly.
Note: the file fragment count is available through the system functions
db_property( 'DBFileFragments' ) and db_property( 'LogFileFragments' ), and
can be displayed in a warning at startup if the fragment count is high.
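For illustration only, the counts can be queried as follows:
select db_property( 'DBFileFragments' ), db_property( 'LogFileFragments' );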
================(Build #4268 - Engineering Case #321812)================
A new server-level property, "NativeProcessorArchitecture", has been added.
On platforms where a processor can be emulated (such as X86 on Win64), this
property returns a string that identifies the native processor type. In all
other cases, property( 'NativeProcessorArchitecture' ) will be equal to property(
'ProcessorArchitecture' ).
For example, the following values are returned:
For the 32-bit (X86) Windows NT engine running on Win64 for Itanium
Property( 'NativeProcessorArchitecture' ) returns 'IA64'
Property( 'ProcessorArchitecture' ) returns 'X86'
For the 32-bit (X86) Windows NT engine running on Win64 for AMD64
Property( 'NativeProcessorArchitecture' ) returns 'AMD64'
Property( 'ProcessorArchitecture' ) returns 'X86'
In all other cases, property( 'NativeProcessorArchitecture' ) returns the
same value as property( 'ProcessorArchitecture' ).
================(Build #4271 - Engineering Case #322005)================
After installing Service Pack 6 for NetWare 5.1, or Service Pack 3 for NetWare
6.0, the server would no longer detect if other servers are running with
the same name. If -z was specified, messages would have appeared in the console
saying "Could not enable broadcasting" and "Broadcast send failed". This
was due to a bug in BSDSOCK.NLM shipped with these two service packs. The
ASA server will now detect this bug and work around it, and will display
a message on the ASA console to that effect. We recommend that a newer version
of BSDSOCK.NLM be downloaded from Novell when it is available.
================(Build #4271 - Engineering Case #322348)================
A procedure containing a comparison of a variable with a subquery involving
proxy tables would have resulted in the error:
OMNI cannot handle expressions involving remote tables inside stored procedures
An example of this situation is:
create procedure p()
begin
    declare myvar int;
    set myvar = 1;
    if myvar = (select col from remote_table where pk = 45) then
        message 'found';
    else
        message 'not found';
    end if;
end
Such procedures will now execute correctly.
================(Build #4272 - Engineering Case #322265)================
TDS connections (ie jConnect) to Turkish databases (for example, using collation
1254TRK) would have failed with the error,
ASA error -200: Invalid option 'AUTOMATIC_TIMESTAMP' -- no PUBLIC setting
exists
This has been fixed.
================(Build #4273 - Engineering Case #320389)================
Index scans on databases created with a version prior to 5.0 (that may have
since been upgraded) could have returned too many rows. This problem only
affected indexes that were not fully hashed, and has now been fixed. The
recommended workaround is to unload and reload the database, but rebuilding
the affected indexes would do as well.
================(Build #4273 - Engineering Case #321160)================
The error, "-189 - Unable to find in index '{index name}' for table '{table
name}', would have been reported whenever INSERT ... ON EXISTING UPDATE was
used on a table having computed columns and the computed column expressions
referenced an indexed column.
For example:
create table T1( pk int not null primary key, csc int null compute( pk+0 ) );
insert into T1 (pk) Values(3);
commit;
insert into T1 (pk) on existing update values(3);
This has now been fixed.
================(Build #4273 - Engineering Case #321666)================
Sequential table scans at isolation levels 1 and 2 did not block on uncommitted
deletes from other transactions, but skipped the rows. A new option, READ_PAST_DELETED,
has been added which changes this behaviour. When ON (the default), sequential
scans at isolation levels 1 and 2 will skip uncommitted deleted rows as before;
when OFF, sequential scans will block on uncommitted deleted rows at isolation
levels 1 and 2, until the deleting transaction commits or rolls back.
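For illustration only, a sketch of requesting the new blocking behaviour
(assuming the option is set like other database options):
SET OPTION PUBLIC.READ_PAST_DELETED = 'off';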
================(Build #4273 - Engineering Case #322602)================
On all Unix platforms except Solaris and Linux, a semaphore is created to
test if semaphore sets are allowed on the current platform. Only on AIX,
this semaphore was not being deleted. This is now fixed.
================(Build #4273 - Engineering Case #322619)================
Connection parameters requiring character set conversion could have been
followed by garbage characters. Depending on the parameters affected (for
example userid or password), this could have caused the connection to fail.
This has been fixed.
================(Build #4275 - Engineering Case #305153)================
If an integrated logon was invalid (i.e. the logon request came from a machine
on a different domain and could not be properly verified), the connection
would have appeared to succeed, even though the client would eventually time
out. The problem has been resolved and a proper error message is now returned
immediately.
================(Build #4275 - Engineering Case #307980)================
If a query used SELECT FIRST, the optimizer might have chosen a poor access
plan; in particular, the optimizer might have made a poor choice of the index
to use with a particular query. In the reported case, the customer had a
query of the form
SELECT FIRST *
FROM base_table
WHERE <condition>
ORDER BY <indexed column>
In this case, the <condition> in the WHERE clause was highly selective,
and could be used as a sargable predicate (that is, the engine could use
an index on base_table). However, because of a costing error, the optimizer
instead chose a plan utilizing the index on the column specified in the ORDER
BY clause, a much more expensive choice in real terms.
This has now been fixed, but as a workaround, one could replace SELECT FIRST
with SELECT TOP 1.
================(Build #4275 - Engineering Case #322102)================
Each time that a server starts a database that has been initialized with
8.0.x software (or later), it allocates a number of memory buffers that are
used to improve I/O performance. The size of these buffers is dependent
on the page size of the database, so the larger the page size, the larger
the piece of contiguous memory that is required.
If the server were to start a large number of databases that had been initialized
with a large page size, the server's address space may have been exhausted
or become fragmented, causing the allocation of one or more of these buffers
to fail. In this situation, the server would have failed with assertion 101413.
The server will now attempt to reduce its memory usage (i.e. to not allocate
as many buffers) when a database is started read-only.
================(Build #4275 - Engineering Case #322888)================
The dbbackup utility (dbbackup -x) could have hung (even if there were no
outstanding transactions), until any connection did a commit, rollback or
disconnect. In order for this to occur, the backup had to take, or be blocked
for, longer than half of the checkpoint time. This has been fixed.
================(Build #4275 - Engineering Case #322910)================
If the MDSR encryption option was not installed, then running dbinit, (or
using 'CREATE DATABASE') and specifying MDSR encryption would have resulted
in the error "Database creation failed: ". The message now reads "Database
creation failed: MDSR is not available".
================(Build #4275 - Engineering Case #322912)================
If the server was started without a -n command line option to name it, but
the filename or alias of the first database listed was longer than 40 characters,
the server would have used that full filename or alias as the server name,
rather than truncating it to 40 characters. If dblocate or the Find Servers
feature of dbisql or Sybase Central was used and found this server, the server
could have crashed. This has been fixed.
================(Build #4275 - Engineering Case #323070)================
If a user made a Remote Procedure Call that involved numeric parameters,
then the call may have failed (if connected to ASE or any other Data Direct
ODBC datasource) or the parameters would not have been passed correctly.
The precision and scale would also not have been passed correctly to the
ODBC driver. This problem has now been fixed.
================(Build #4276 - Engineering Case #323179)================
A SELECT statement containing a CASE expression would have failed on big
endian platforms whenever the WHEN clause specified a constant expression
within the range -32768 to 32767 inclusive. This is now fixed.
================(Build #4278 - Engineering Case #323062)================
If a database was autostarted by a connection request, the engine should
have returned an "Unable to start database" error if there were already 255
databases started. The engine, however, did not return any error, even though
the connection was not established and the database was not started. This
problem has now been fixed.
================(Build #4278 - Engineering Case #323655)================
If a table was renamed to a name longer than 128 characters, the table would
have become inaccessible. Attempting to rename a table to a name longer than
128 characters will now generate an "identifier 'xxx...' too long" error.
================(Build #4278 - Engineering Case #323706)================
Specifying 'PCTFREE 0' on the CREATE TABLE statement had no effect and the
PCTFREE specification was left as the default value. With this change CREATE
TABLE will work as intended.
Note that specifying 'PCTFREE 0' on the ALTER TABLE statement works correctly
and can be used after CREATE TABLE to achieve the desired specification.
================(Build #4279 - Engineering Case #320239)================
If a remote function was defined (using CREATE FUNCTION ... AT ...) and the
location string did not fully qualify the function name, then attempting
to call the remote function would have crashed the server. This problem has
now been fixed.
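For illustration only, a sketch of a location string that fully qualifies
the remote function (the server, database, owner and function names here
are hypothetical):
create function remote_func( in x int ) returns int
at 'remsrv.remdb.dbo.remote_func';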
================(Build #4279 - Engineering Case #322436)================
The datatype of CURRENT PUBLISHER, as reported by
select exprtype('select current publisher',1)
would have been different depending on whether or not a Publisher was defined.
The data type will now be varchar(128) consistently.
================(Build #4279 - Engineering Case #323954)================
Attempting to start a database in read-only mode would have crashed the runtime
engine if the database had an associated transaction log. The runtime engine
will now start successfully. A workaround is to disable the transaction log
using "dblog -n ...".
================(Build #4279 - Engineering Case #323973)================
A server crash at an inopportune moment could have resulted in a corrupt
database. This was more likely to have occurred with 9.x servers, and with
8.x servers running 8.x databases. It was unlikely to have occurred with
8.x and earlier servers when running against 7.x or earlier databases. This
has been fixed.
================(Build #4280 - Engineering Case #324348)================
If a stored procedure contained remote queries, and the server encountered
an error while building a cursor on those remote queries, then the server
would have leaked memory. This has now been fixed.
================(Build #4281 - Engineering Case #322453)================
When a server with the Quiet Mode command line option (-q) was started as
a Windows service that was allowed to interact with the desktop, any MESSAGE
statements executed by the server caused the server window to be enabled.
This has been fixed.
================(Build #4281 - Engineering Case #324489)================
With recent changes made to improve semantic checking for GROUP BY queries,
and to ensure consistent results for very complex queries, it was possible,
though unlikely, that rewrite optimizations performed on such queries may
have led to a server crash. This problem has been corrected.
For example, the following query
SELECT T4.col3 a1,T1.col2 a2,avg(DISTINCT T3.col2) a3,max(T4.col2) a4,
min(T2.col3) a5,count(T4.col2) a6,list(T2.col2) a7,
sum(T4.col3) a8,sum(T2.col1) a9,sum(T3.col3) a10,max(T4.col3) a11,
count(T2.col3) a12
FROM tab5 T1
JOIN tab4 T2 ON T1.col1 = T2.col2
RIGHT OUTER JOIN tab5 T3 ON T2.col3 = T3.col3
JOIN view5 T4 ON T3.col2 = T4.col3
WHERE ( T4.col1 + T3.col1 >= ALL
( SELECT T3.col3 - T1.col3 - T2.col2 - T3.col3 + T1.col2
+ T3.col3 + T2.col2 + 0 FROM tab5 T1
JOIN view4 T2 ON T1.col3 = T2.col3
JOIN tab3 T3 ON T2.col1 = T3.col2 WHERE T3.col2 <= 398 ))
OR (T1.col2 + 0 >= ALL
( SELECT T2.col3 - T1.col2 * 0 FROM tab3 T1
JOIN view5 T2 ON T1.col2 = T2.col3 WHERE ( T2.col3 - T1.col3
+ 1 = ANY
( SELECT T1.col2 * 0 FROM tab5 T1
JOIN tab1 T2 ON T1.col1 = T2.col3 ))
OR (T2.col3 <= 350 ) ) )
GROUP BY T4.col3, T1.col2
HAVING T4.col3 >= T4.col3
INTERSECT
SELECT T2.col2 + T1.col2 * 0 a1, T3.col2 + T1.col2 + 0 a2, T2.col2 * 0
a3, T1.col3 - T3.col1 a4, T1.col3 * T2.col3 a5, T1.col3 + T2.col3 -
T3.col3 - T1.col3 + 1 a6, T3.col1 + T1.col1 + T2.col1 * T3.col3 a7,
T2.col2 + T3.col1 a8, T1.col2 + 1 a9, T3.col2 + T1.col2 + 1 a10,
T2.col3 + 1 a11, T1.col3 - T2.col1 - T3.col3 + T2.col1 + T3.col2
+ T1.col3 + T2.col3 * 0 a12
FROM view3 T1
JOIN tab1 T2 ON T1.col2 = T2.col2
JOIN view3 T3 ON T2.col3 = T3.col2
WHERE T1.col2 <> 240
could have crashed the server; the rewrite optimization is the transformation
of the HAVING predicate T4.col3 >= T4.col3 into T4.col3 IS NOT NULL.
================(Build #4281 - Engineering Case #324622)================
Queries with outer joins, where there were predicates involving IN() or ALL()
conditions, may have been incorrectly rewritten as inner joins, producing
incomplete or incorrect results. This may have happened to IN() conditions
if there were multiple terms, at least one of which was not null-rejecting
and may have happened to ALL() conditions if the predicate was an equality
predicate and the ALL() returned no rows (in this case, the predicate is
automatically true).
Two examples:
SELECT T2.col3 a2
FROM qts324622 T2
RIGHT OUTER JOIN qts324622 T3 ON T2.col3 = T3.col3
WHERE T2.col2 = ALL
( SELECT col4 from qts324622 where col1 > col2 and col2 > col1 )
SELECT *
FROM qts324622 T1
LEFT OUTER JOIN qts324622 T2 ON T1.col1 = T2.col2
WHERE T1.col1 IN ( T1.col1, T2.col1 )
In both cases, the outer joins were rewritten as regular joins, causing
some rows to be missed. This has been fixed.
================(Build #4281 - Engineering Case #324675)================
The first call to an external stored procedure on a multiprocessor Windows
NT, 2000, XP or 2003 machine could have caused the server to crash. Subsequent
calls were safe. This has now been fixed.
================(Build #4282 - Engineering Case #323275)================
When connected to a database with the Japanese collation 932JPN, executing
a CREATE PROCEDURE, FUNCTION or TRIGGER statement, which had Japanese characters
with the second byte as \x7d, would have caused a syntax error (ASA error
-131). This problem has been fixed.
================(Build #4282 - Engineering Case #324627)================
When a server with the -qi command line option (quiet - no window and no
icon) was started as a Windows service, any MESSAGE statements executed
by the server caused the System Tray icon to be displayed. This has been
fixed.
================(Build #4283 - Engineering Case #324683)================
If the Unicode translation library was missing (dbunic9.dll for 9.x, libunic.dll
otherwise) then calling the COMPARE() or SORTKEY() functions would have caused
the server to crash. Now the functions fail with an Invalid Parameter error,
without crashing the server.
================(Build #4284 - Engineering Case #320882)================
Writing a message to the server window with the MESSAGE statement which contained
Japanese characters, would not have displayed correctly on a Linux machine
running in a Japanese environment. This has been fixed; the euc-jp code
page is now supported.
================(Build #4284 - Engineering Case #322431)================
When creating a view, the server must store the CREATE VIEW statement with
all the referenced tables qualified with the owners. If a table appeared
multiple times in the view definition, the second and subsequent references
were not being qualified.
For example, when created by USER1
create view V1 as select * from A join B,
A join C
this view was stored as
create view V1 as select * from USER1.A join USER1.B,
A join USER1.C
As a result other users could not execute the view and a database rebuild
would have failed because table "A" could not be found. This has been fixed.
================(Build #4284 - Engineering Case #325000)================
If a query's WHERE clause satisfied the conditions given below, the server
would have crashed during optimization. This problem has now been fixed.
(1) the WHERE clause must have contained a tautology (found in the original
WHERE clause or generated by a rewrite optimization)
(2) there were at least two equijoin predicates referring to the same column.
(ie, there were at least two predicates of the form "T.col1 = R.col1 AND
T.col1 = S.col1").
For example:
T.col1 = R.col1
and
T.col1 = S.col1
and
( (T.col2 >=100000
and T.col2 < 2000000
and expr1 >= 1000)
or (T.col2 >=2000000
and expr1 >= 10000)
)
In this case a tautology was generated by one of the rewrite optimizations,
namely "T.col2 < 2000000 OR T.col2 >= 2000000".
================(Build #4284 - Engineering Case #325093)================
If the server was started with the -qs command line option, and a usage error
occurred, a usage dialog would have appeared on Windows platforms. The usage
dialog is now suppressed if -qs is on the command line. Note that including
-qs within @filename or @environment variable command line expansion will
not suppress the usage dialog.
Also, if a usage error now occurs, a message is appended to the -oe error
log file on all platforms.
================(Build #4286 - Engineering Case #320876)================
When using the Foreign Key wizard on Unix versions of Sybase Central, in
some instances the last foreign column selection chosen by the user would
be ignored. This has been fixed.
================(Build #4286 - Engineering Case #324400)================
The server could have crashed when optimizing a query that contained the
same table with two correlation names in the FROM clause with the primary
keys of the two correlations equated and a full outer join also present.
This problem has been fixed.
For example, the following query showed the problem.
SELECT 1
FROM ((tab1 T7 , tab1 T9)
FULL OUTER JOIN tab2 T10 ON T9.col3 = T10.col2)
LEFT JOIN tab2 T11 ON T10.col2 = T11.col3
WHERE T7.col1 = T9.col1
================(Build #4286 - Engineering Case #324897)================
If a DELETE trigger failed, a subsequent DELETE of a row from the table
would have caused a server hang. For this to have occurred, the trigger
must have failed due to an attempt to delete a row that still had foreign
keys referencing it, the Wait_for_commit option must have been off, and the
table being deleted from must have had a trie-based index as the index with
the lowest index id (e.g. the pk index, if a pk exists). This has now been fixed.
================(Build #4286 - Engineering Case #325403)================
Calling a proxy stored procedure (e.g. dbo.sp_remote_tables) would have resulted
in a heap page being left locked in the server's cache. If the procedure
was called many times, this could have exhausted the cache as well as cause
the temporary file to grow. This is now fixed.
================(Build #4287 - Engineering Case #322634)================
Fetching the last row of a query using an ESQL widefetch or ODBC fetch with
a rowset size greater than 1, could have been slow. Fetching the last row
in this case could have taken as long as fetching all previous rows. Fetching
rows one at a time could fetch the last row more quickly than when using
a widefetch.
This has been fixed so that the last fetch of a widefetch is now as quick
as the last row of a single row fetch.
Note if the cursor type is ESQL SENSITIVE or ODBC DYNAMIC, and isolation
level 1 is used, the widefetch of the last row may still be slower than a
single row fetch, due to the cursor stability requirement.
================(Build #4287 - Engineering Case #325861)================
In some rare situations a 64-bit server could have failed to correctly read
column statistics that were created in the database by a 32-bit server. The
failure could also have led to the 64-bit server crashing. This has been
fixed.
================(Build #4288 - Engineering Case #322442)================
After a system reboot, user login or Explorer restart, the system tray icon
of the database engine did not appear immediately. The first checkpoint or
message to the server window would have caused the system tray icon to be
displayed. This has been fixed, now the system tray icon is visible immediately.
================(Build #4289 - Engineering Case #326025)================
Attempting to convert a string like 'May 1999' to a date or timestamp, would
have failed with a conversion error when the Date_order option was set to
'DMY'. The value will now be converted using a default day of 1.
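For illustration only, a sketch of the corrected behaviour:
SET TEMPORARY OPTION Date_order = 'DMY';
select cast( 'May 1999' as date );  -- now converts, using a default day of 1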
================(Build #4290 - Engineering Case #323893)================
If a SELECT statement inside a procedure was of the type "SELECT * from derived_table",
the server may have crashed when trying to reuse the cursor for the statement.
This has been fixed.
Example:
create procedure GetTest1( ) as
begin
select c0 from ( select testA0 as c0 from TEST_A
union
select testA0 as c0 from TEST_A ) as ALLTEST( c0 )
end
call GetTest1() ;
================(Build #4290 - Engineering Case #326219)================
Some pages that were allocated then freed by the database server, may not
have been reused after the next checkpoint. This problem could have resulted
in the server growing the database file unnecessarily, as the file would
have been grown to create new free pages rather than using existing free
pages within the database file. However, all of the free pages would have
been recognized and reused if the server was shut down and restarted (until
the first checkpoint occurred afterwards). This problem only affected databases
created with 8.0.0 or later and which had a page size of at least 2K. It
has now been fixed.
================(Build #4291 - Engineering Case #325741)================
When the server was running on NetWare, if a string containing the '%' character
was displayed on the server console (through the MESSAGE statement or request-level
logging), the server could have displayed garbage or crashed. This has been
fixed.
================(Build #4291 - Engineering Case #326164)================
The server could have crashed when sampling trie-based indexes. For this
to have occurred, there must have been no entries in the index being sampled,
while there were rows in the underlying table. This could only have occurred
if another transaction had concurrently deleted the last row in the table,
while the sampling was in progress. This has now been fixed.
================(Build #4292 - Engineering Case #291086)================
Error checking and execution of queries with a GROUP BY clause could have
failed in many cases, resulting in failure to give an error, wrong results,
or server crashes.
A problem could have appeared any time an alias was defined in one context
and used in another (where possible contexts are inside and outside an aggregate
function, in the select list, in the WHERE clause, in the HAVING clause,
and in the ORDER BY clause).
For example, the query
select max(e), emp_id e from employee
did not return an error, although it should have because emp_id appears
in the select list but not the GROUP BY clause. Errors are also not returned
in most cases involving views or derived tables. For example, the query
select tname, List(distinct cname)
from sys.syscolumns
where tname = 'systable'
should have returned an error because tname does not appear in the GROUP
BY clause.
Problems could also have occurred when an expression was formed from GROUP
BY elements without exactly matching one, or when subselects were aliased
and used in the WHERE clause of a grouped query. In general, any grouped
query using aliases, views or derived tables was suspect. These problems
are now resolved.
================(Build #4293 - Engineering Case #322474)================
If a loop in a procedure contained a "select ... into #tmp ..." to create
a temporary table as well as "drop table #tmp" and the loop iterated many
times, the server would eventually have failed with assertion 101506 - allocation
size too large when reallocating memory. This has been fixed.
================(Build #4293 - Engineering Case #323652)================
If an update affected the primary key columns for a table, more than one
row was modified, and an AFTER row-level trigger was defined on the table,
the update could have caused the server to crash. Whether or not a crash
would have occurred depended on additional factors, such as the number of
rows in the table, the access plan used to perform the updates, and the actions
within the trigger. This has been fixed.
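A hypothetical schema meeting the conditions described (whether a crash
actually occurred also depended on the factors listed above):
    create table T( pk integer primary key, v integer );
    create trigger trgT after update on T
    referencing new as nrow
    for each row
    begin
        message 'row updated' to console;
    end;
    -- an update that modifies the primary key of more than one row:
    update T set pk = pk + 1000;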
================(Build #4293 - Engineering Case #326616)================
Executing unsupported statements when connected to the utility_db could
have reduced the cache memory available. Executing many unsupported statements
could have caused performance degradation or even caused the server to run
out of memory. When connected to the utility_db database, statements such
as CREATE DATABASE, START DATABASE, etc are supported. Unsupported statements
give the error "Permission denied: you do not have permission to execute
a statement of this type" (-121).
This has been fixed so that available cache memory is not reduced by unsupported
utility_db statements.
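For example (the file name is hypothetical, and the exact set of supported
statements may vary by version):
    -- supported when connected to utility_db:
    START DATABASE 'c:/asa/mydb.db';
    -- not supported; returns error -121 and, before this fix,
    -- also consumed cache memory on each execution:
    CREATE TABLE t1( c1 integer );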
================(Build #4294 - Engineering Case #326720)================
Calling the system procedure xp_startsmtp, with smtp_sender=null, could have
resulted in the server crashing. This has been fixed.
================(Build #4294 - Engineering Case #326733)================
The functions xp_startmail, xp_startsmtp, xp_sendmail, xp_cmdshell, xp_read_file,
and xp_write_file would have failed on Windows CE if character set conversion
was required between the database character set and the OS character set.
This has now been fixed.
================(Build #4294 - Engineering Case #326745)================
If a stored procedure, which opened a cursor on another stored procedure
call, was called many times by an application using jConnect or Open
Client, the result set for the second stored procedure may have been described
to the client even though no data would ever have been returned for that
result set. This problem has now been fixed.
================(Build #4294 - Engineering Case #326799)================
The MobiLink server may have generated an ODBC error, "function sequence
error", when uploading a table with blob columns.
This would only have occurred if:
- the table had a nullable column before the blob column
- an index column followed the blob column
- the data for the column before the blob column was NULL
This has been fixed, but a workaround is to make sure there are no index
columns following any blob columns.
================(Build #4295 - Engineering Case #309290)================
An UPDATE statement of the form:
update Employee e
set e.emp_fname ('x')
where all of the following were true:
- the column to be updated was qualified with a correlation name
- the "=" was missing
- the expression being assigned was enclosed in parentheses
would have resulted in a server crash. An error (Set clause for column 'a' used
incorrectly) will now be reported.
================(Build #4295 - Engineering Case #326980)================
Now, when the server detects a deadlock situation, if there is a transaction
with a blocking_timeout specified, the one with the earliest deadline will
be chosen as the victim to be cancelled.
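A transaction indicates its willingness to wait via the blocking_timeout
option; a sketch, assuming the value is given in milliseconds:
    set temporary option blocking_timeout = 5000;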
================(Build #4296 - Engineering Case #324988)================
For queries with more than one table and equijoin predicates (e.g., 'key
joins'), the error "Dynamic Memory Exhausted" may have been generated if
the server ran with a very small cache. This issue has been fixed.
For example:
select * from systable T key join syscolumn C key join sysuserperm U
================(Build #4299 - Engineering Case #324334)================
Queries that contained a nested block join with a Work Table operator above,
could have caused the server to crash if the nested block join returned a
string column. For example, this type of plan can be selected if a hash join
is used below a nested block join. This is now fixed.
================(Build #4299 - Engineering Case #327304)================
If a user's password was changed to something containing a semi-colon (';'),
connecting as that user was no longer possible, except from another connection
by a user with DBA authority not specifying a password. This has been
fixed - the GRANT statement will now fail if the password contains a semi-colon.
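For example, a statement such as the following now fails instead of setting
a password that prevents reconnection (the user name and password are
illustrative):
    GRANT CONNECT TO some_user IDENTIFIED BY "abc;def";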
================(Build #4299 - Engineering Case #327704)================
A connection over named pipes, which used a communication buffer size greater
than the default 1460 bytes, could have failed with a communication error,
if a request or response was larger than the default packet size. The communication
buffer size is specified with the -p server option or the CommBufferSize
(CBSIZE) connection parameter.
This has now been fixed so that communication errors no longer occur.
================(Build #4300 - Engineering Case #325437)================
Queries with many nested 'ANY (subquery)' predicates may have had a large
OPEN time, due to rewrite optimizations applied while flattening subqueries.
For this to have occurred, all the subqueries had to have been flattenable
and many equality predicates were part of the original query or could have
been inferred for some of the subqueries.
For example:
select * from t0 where t0.c2 in
  (select t1.c2 from t1 where t1.c2 in
  (select t2.c2 from t2 where t2.c2 in
  (select t3.c2 from t3 where t3.c2 in
  (select t4.c2 from t4 where t4.c2 in
  (select t5.c2 from t5 where t5.c2 in
  (select t6.c2 from t6 where t6.c2 in
  (select t7.c2 from t7 where t7.c2 in
  (select t8.c2 from t8 where t8.c2 in
  (select t9.c2 from t9 where t9.c2 in
  (select t10.c2 from t10 where t10.c2 in
  (select t11.c2 from t11 where t11.c2 in
  (select t12.c2 from t12 where t12.c2 in
  (select t13.c2 from t13 where t13.c2 in
  (select t14.c2 from t14 where t14.c2 in
  (select t15.c2 from t15 where t15.c2 in
  (select t16.c2 from t16 where t16.c2 = 30))))))))))))))))
================(Build #4300 - Engineering Case #326320)================
When a NULL constant is converted to a NUMERIC, a precision and scale of
(1,0) is now used instead of the default set by the options Precision and
Scale. This is particularly important for UNION queries such as the following:
select unit_price from product
union all
select NULL from dummy
Previously, the data type of the result would have been described as a NUMERIC(30,6)
(with default precision and scale settings). Now, it is described as NUMERIC(15,2),
the data type of the unit_price column. Explicit conversions to NUMERIC will
use (1,0), if the conversion does not provide a precision and scale.
For example:
SELECT cast( NULL as numeric ) A, cast( NULL as numeric(15,2) ) B
will be described as:
A NUMERIC(1,0)
B NUMERIC(15,2)
Now, the behaviour of NULL constants is more consistent with that of other
constants, where the precision and scale is selected to be as small as possible.
================(Build #4300 - Engineering Case #327432)================
For very complex WHERE clauses in disjunctive form, new IN predicates are
now generated that can be used as sargable predicates. For an IN predicate
of the form "T.X IN ( constant_1, constant_2, ...)" to be generated, it
is necessary to have in each term of the disjunction a predicate of the form
"T.X = constant_i". In the example below, query Q1 is now transformed into
query Q2, where two new sargable IN predicates are generated.
Example:
Q1:
select *
from T
where (T.X = c1 and T.Y = c2) or
      (T.X = c3 and T.Y = c4) or
      (T.X = c5 and T.Y = c6) or
      (T.X = c7 and T.Y = c8) or
      (T.X = c9 and T.Y = c10) or
      (T.X = c11 and T.Y = c12)
Q2:
select *
from T
where T.X IN ( c1, c3, c5, c7, c9, c11 ) and T.Y IN ( c2, c4, c6, c8, c10, c12 )
  and (T.X, T.Y) IN ( (c1, c2), (c3, c4), (c5, c6), (c7, c8), (c9, c10),
                      (c11, c12) )
================(Build #4300 - Engineering Case #328084)================
Predicates referring to undeclared host variables and string columns, were
not considered sargable. Hence, the predicates wouldn't have been used for
an index scan if an appropriate index existed.
For example:
create table T (c varchar(100) NOT NULL,
b int NOT NULL,
PRIMARY KEY (c, b) );
select ulplan( 'SELECT * from T WHERE T.c = ? and T.b = ? ');
The plan for the above query wouldn't have contained an index scan, when in fact
the primary key index would have been appropriate to be used here. This is
now fixed.
================(Build #4301 - Engineering Case #327642)================
User functions that had a procedure id (proc_id in SYSPROCEDURE) greater
than 32K would have always returned NULL. This has been fixed; these
functions now return the correct value.
================(Build #4301 - Engineering Case #327943)================
Before the Conjunctive Normal Form (CNF) algorithm is applied to a WHERE
clause, disjunctions of the form "X = 10 OR X = 20 OR X IN (40, 50)" are
now transformed into IN lists (i.e., X IN (10, 20, 40, 50)). Similarly, disjunctions
of the form "X = 10 OR X = 20 OR X = 30" are transformed into IN lists
(i.e., X IN (10, 20, 30)). These transformations reduce the number of predicates
in a WHERE clause, so rewrite optimizations (such as CNF conversion) that are
done for WHERE clauses, and that may not have been feasible prior to this change,
can now be applied; such queries may now perform much better.
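As a sketch, a WHERE clause like the one below (table and column names are
hypothetical) is now rewritten before CNF conversion:
    select * from T
    where T.X = 10 or T.X = 20 or T.X in ( 40, 50 );
    -- is treated as:
    select * from T
    where T.X in ( 10, 20, 40, 50 );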
================(Build #4302 - Engineering Case #292648)================
If a procedure assigned a variable with the result of a subselect and then
returned a result set later on, the result set was sometimes not picked up
by the client. For example, consider the following procedure:
create procedure DBA.test()
result(xlicencas integer)
begin
declare xLicencas integer;
declare xConnection integer;
set xConnection=(select NEXT_CONNECTION(null,null));
set xLicencas=34;
select xLicencas
end
The "set xConnection=(select NEXT_CONNECTION(null,null));" line would have
caused the result set of "select xLicencas" to be dropped by the client.
This problem has now been fixed.
================(Build #4302 - Engineering Case #328051)================
Complex SELECT INTO statements executed inside a stored procedure may have
failed after the first call to the procedure. This has now been fixed. Workarounds
are (see the sketches below):
(1) rewrite the SELECT ... INTO statement as an EXECUTE IMMEDIATE statement
(2) rewrite the SELECT ... INTO statement as an INSERT statement (the table
inserted into must be created first in this case)
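Sketches of the two workarounds, assuming a hypothetical statement
"SELECT c1, c2 INTO #tmp FROM T":
    -- (1) as an EXECUTE IMMEDIATE statement:
    execute immediate 'select c1, c2 into #tmp from T';
    -- (2) as an INSERT statement, creating the table first:
    create table #tmp( c1 integer, c2 integer );
    insert into #tmp select c1, c2 from T;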
================(Build #4302 - Engineering Case #328387)================
A query with a WHERE clause containing duplicate IN lists may have returned
incorrect results if the following conditions were true:
(1) the duplicate IN list must have been on a base or view column (e.g.,
T.X IN (2,3))
(2) the duplicate IN list must have appeared in a disjunct with another
IN list on the same column (e.g., "(T.X IN (4, 5) OR T.X IN (2,3))" )
(3) the WHERE clause was in Conjunctive Normal Form
This is now fixed.
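A hypothetical query meeting all three conditions:
    select * from T
    where ( T.X in ( 4, 5 ) or T.X in ( 2, 3 ) )
      and T.X in ( 2, 3 );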
================(Build #4303 - Engineering Case #327225)================
A query containing a UNION with a very large number of branches would have
caused an assertion failure: "101505 - Memory allocation size too large".
Now, the server no longer asserts, but instead the statement fails with an
error indicating that a syntactic limit has been reached.
================(Build #4303 - Engineering Case #328354)================
Prior to this fix, if the server added an index entry to an upgraded version
4 database, the index could have become corrupted. For this corruption to
occur, the value being inserted must not have been fully hashed (i.e. was
longer than ~9 bytes). The symptom of the corruption was that querying the
index for entries with a particular value returned an incorrect (usually
fewer) number of rows. Range searches would have produced the expected number
of rows. While the server has been fixed and the database will no longer
be corrupted, Validate Table/Index will not catch any existing corruption.
A workaround would be to unload/reload to a later format database.
================(Build #4303 - Engineering Case #328444)================
Queries that contained an outer reference from a subquery to a grouped query
block would have failed with the error "Function or column reference to 'x'
must also appear in a GROUP BY".
This was not a requirement if the subquery appeared inside an argument to
an aggregate function as in the following:
select manager_id, max( (select emp_id from employee where salary=e.salary)
) from employee e
group by manager_id
This has now been fixed.
================(Build #4304 - Engineering Case #327932)================
ASA Windows applications including the server, dbremote, dbmlsync and dbmlsrv
would fail to find quoted UNC style filenames specified on the command line.
For example, passing the database file as "\\server\share\dir\my.db" would
have failed with the error "Could not open/read database file: \server\share\dir\my.db"
Attempting to use connection strings such as "...;dbf=\\server\share\dir\my.db;..."
would have succeeded only if the server was already running (even if the
database my.db wasn't), but would have failed, due to this problem, if the
server was not running.
This has been fixed so that quoted command line arguments of the form "\\server\share\etc"
no longer have the first backslash removed.
Note that running a database from a file server can corrupt the database
and is generally not recommended.
================(Build #4304 - Engineering Case #328881)================
If a DTC transaction was unenlisted while another DTC action was running
the server may have crashed. This would have been seen if an application
ran concurrent DTC actions from multiple threads against the same connection.
This problem has been fixed.
================(Build #4304 - Engineering Case #329090)================
If a user attempted to insert a long binary value into a proxy table using
a host variable, there was a chance the server would have crashed. This problem
has been fixed.
================(Build #4304 - Engineering Case #329124)================
Inserting a long binary constant into a proxy table, would have caused the
inserted value to have been corrupted. This problem has now been fixed.
================(Build #4304 - Engineering Case #329294)================
When converting a NULL constant to a string type (i.e. CHAR, VARCHAR, LONG
VARCHAR, BINARY, VARBINARY, LONG BINARY), the size would be initialized to
32767 if no length was provided. Now, the size is initialized to 0.
For example, the following queries would have returned a column described
as length 32767:
SELECT CAST( NULL AS CHAR )
--> now CHAR(0)
SELECT 'abc' UNION ALL SELECT NULL
--> now CHAR(3)
SELECT '' UNION ALL SELECT NULL
--> now CHAR(0)
SELECT IF 1=1 THEN 'abc' ELSE NULL ENDIF
--> now CHAR(3)
================(Build #4304 - Engineering Case #329327)================
If a server running on a Unix platform, was started as a daemon (i.e. with
-ud), then making remote ODBC connections would have failed with error -656.
This problem has been fixed.
================(Build #4304 - Engineering Case #329521)================
Attempting to insert a NULL value into a NUMERIC or DECIMAL column with "default
autoincrement" could have led to a server crash. The NULL value must have
been cast to NUMERIC. The following statement would have caused the crash:
INSERT INTO T( x ) SELECT CAST( NULL AS NUMERIC )
For other column types, there was a possibility that the value returned
from @@identity would be incorrect after inserting a NULL value. The correct
value should be 0. This problem is now fixed.
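For example, with a hypothetical table using an INTEGER autoincrement column:
    create table T2( x integer default autoincrement );
    insert into T2( x ) values( NULL );
    select @@identity;   -- now correctly returns 0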
================(Build #4304 - Engineering Case #329528)================
Joins on more than one attribute, executed with merge join, would have incorrectly
returned rows where a join attribute was NULL in both tables, provided that
it was not the first attribute containing NULL. Rows with NULL join attributes
will now be rejected as NULL = NULL is Unknown, not True.
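A sketch of the affected pattern (tables and columns are hypothetical):
    select *
    from T1 join T2 on T1.a = T2.a and T1.b = T2.b;
    -- rows where T1.b and T2.b were both NULL could previously be
    -- returned by a merge join; they are now correctly rejected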
================(Build #4305 - Engineering Case #329788)================
The MIN/MAX optimization (documented in "ASA SQL User's Guide/Optimization
for minimum or maximum functions") was not being applied for queries containing
predicates with host variables (in Ultralite), or database variables (if
plan caching was used).
For example, a query of the form:
SELECT min( R.y )
FROM R
WHERE R.x = ?
should be converted to:
SELECT MIN( R.y)
FROM ( SELECT FIRST R.y
FROM R
WHERE R.x = ? and R.y IS NOT NULL
ORDER BY R.x ASC, R.y ASC ) as R(y)
(assuming that an index exists on R(x,y) ).
Further, a query in a stored procedure that used SQL variables (CREATE VARIABLE
or DECLARE) or procedure/function parameters, would not have allowed the
optimization when the query was optimized for plan caching. A symptom of this
was that a stored procedure containing a query that met the conditions for this
optimization would execute more slowly after a few initial calls. A work-around
would be to set MAX_PLANS_CACHED=0 to prevent plan caching.
Both these problems have now been fixed.
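The work-around mentioned above can be applied for a single connection, for
example:
    set temporary option MAX_PLANS_CACHED = 0;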
================(Build #4306 - Engineering Case #329901)================
As of 8.0.2, pipelined versions of hash and sort operators were introduced.
This allowed the materialization of result rows from the operator to be deferred
instead of occurring immediately in the operator. For UNION based queries,
some branches of the union may require materialization while others do not.
Previously, all rows were materialized in a work table. Now, a work table
will be placed at the top of each branch that requires materialization, unless
all branches require materialization. This reduces the overhead of unnecessarily
materialized rows.
================(Build #4306 - Engineering Case #330075)================
A multi-threaded client application, which had multiple connections concurrently
using the same debug logfile (specified with the LogFile connection parameter),
could have been missing debug messages, or possibly have crashed. Problems were
more likely on a multiprocessor machine. This has been fixed.
================(Build #4307 - Engineering Case #325453)================
It was possible for the round() function to have returned an incorrect result.
The number returned may have been truncated, instead of rounded, if the digit
to be rounded was a 5. For example, 69.345 may have been 'rounded' to 69.34.
This has been fixed.
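For example:
    select round( 69.345, 2 );
    -- previously could have returned 69.34; now returns 69.35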
================(Build #4307 - Engineering Case #328939)================
An incorrect input parameter length or value could have been passed when
calling a remote procedure on an ODBC class remote server. This problem could
also have caused a server crash in rare situations. It is now fixed.
================(Build #4307 - Engineering Case #330127)================
If a TDS (jConnect or Open Client) application made a request which returned
a large amount of data (several Kb or more), and then sent a cancel request,
the server could have gone to 100% CPU usage, attempting to send the cancel
response. While the server was attempting to send the cancel response, checkpoints
or DDL would have caused all connections to hang. Fetching a large result
set with older versions of jConnect could have hung after running out of memory
and sending a cancel request, causing the server to hang indefinitely. (Current
jConnect EBFs may get a java.lang.ArrayIndexOutOfBoundsException instead
of hanging.)
Now the server will no longer go to 100% CPU usage or cause all connections
to hang when processing a TDS cancel. If the server is unable to send the
TDS cancel response for 60 seconds, it will drop the connection.
================(Build #4307 - Engineering Case #330234)================
The TIMESTAMP_FORMAT used when one connection queried another connection's
last request time (i.e. connection_property('LastReqTime',some_other_conn_id))
would have been the other connection's format, not the current connection's.
This has been fixed.
This change also fixes the case where the server could have crashed if a
connection queried another connection's last request time at the instant
that the other connection was being created.
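For example, where 42 stands for some other connection's id (hypothetical):
    select connection_property( 'LastReqTime', 42 );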
================(Build #4308 - Engineering Case #330481)================
For a query where IN-list merging took place, followed by CNF to DNF conversion,
the result of the conversion may have been incorrect. This could only have
happened if a literal altered by the IN-list merging appeared in more than
one conjunct.
For example, in the following, the predicate N1.N_NAME = 'GERMANY' appears
in two conjuncts and the second appearance is changed to N1.N_NAME IN ('GERMANY',
'FRANCE') by IN-list merging.
(N1.N_NAME = 'GERMANY' or N2.N_NAME = 'GERMANY')
and (N1.N_NAME = 'FRANCE' or N2.N_NAME = 'FRANCE')
and (N1.N_NAME = 'GERMANY' or N1.N_NAME = 'FRANCE')
and (N2.N_NAME = 'FRANCE' or N2.N_NAME = 'GERMANY')
This has been fixed. Note that this is the same problem that was partially
fixed for 328387.
================(Build #4076 - Engineering Case #296095)================
When changing the SQL Remote Send Frequency for the consolidated user on
a remote database through the SQL Remote tab of the Database Properties dialog,
it was possible to break replication. Possible symptoms included data not
being sent from the remote, or the consolidated database continuously reporting
that there was a missing message. This has now been fixed.
================(Build #4076 - Engineering Case #299575)================
If the main Sybase Central window was partially off screen, any dialogs that
were opened may also have been partially off screen. This has now been fixed.
================(Build #4079 - Engineering Case #300375)================
When editing a BIT value and then switching to a new row, the change would
have been lost. This has now been fixed. This problem occurred in dbisql as well.
================(Build #4092 - Engineering Case #302925)================
Using a Database, Message Type or User property sheet to set a publisher's,
consolidated user's or remote user's address for SQL Remote, would have removed
any backslashes in the address unless they were doubled up. They needed to
be doubled up each time a change was made to settings on the page of the
property sheet. This has been fixed.
================(Build #4096 - Engineering Case #304202)================
On Unix platforms, when editing table data in Sybase Central, (or dbisql),
pressing the TAB (or SHIFT-TAB) key would have resulted in the focus moving
over two cells instead of just one. This has now been fixed.
================(Build #4207 - Engineering Case #306643)================
When trying to create a new service, checking the properties of an existing
service, or creating a new integrated login, it was possible for Sybase
Central to crash if a large number of local users were defined on the system.
This is now fixed.
================(Build #4211 - Engineering Case #307991)================
When displaying data for views that referenced proxy tables, duplicate rows
would have been shown, if the connection to the database was made using jConnect.
This has been fixed so that the correct rows are now displayed.
================(Build #4214 - Engineering Case #308999)================
Clicking the Help button while in the Query Editor, would have done nothing.
It now correctly displays the help screen.
================(Build #4237 - Engineering Case #314969)================
If the Database Object Debugger was launched from Sybase Central and a connection
made, and dbisql was then also launched from Sybase Central and a connection
made, then dbisql, the Object Debugger and Sybase Central would all have hung.
This has now been fixed.
================(Build #4246 - Engineering Case #317240)================
Procedure profiling was not available on a case-sensitive database (the menu
entries to use it were missing). This has now been fixed.
================(Build #4249 - Engineering Case #304262)================
If more than about 200 characters were specified in the parameters text area
of a service's property sheet, then the next time the property sheet was
opened, the parameters would be empty. This has been fixed.
================(Build #4253 - Engineering Case #318929)================
When run on Linux, the plug-in's Help > Adaptive Server Anywhere x > Online
Resources menu item did nothing. It has been removed.
================(Build #4256 - Engineering Case #319516)================
On Windows XP, the saved window location values for the Sybase Central window
were set incorrectly when the window was maximized and then closed. This has
now been fixed.
================(Build #4259 - Engineering Case #304559)================
When attempting to change the list of filtered owners for a WIN_LATIN5 (Turkish)
database, the error "ASA Error -143: Column '@p0' not found" would have been
displayed. This has been fixed.
================(Build #4259 - Engineering Case #320277)================
When installed on Windows 2003, the Services folder in the ASA and MobiLink
plug-ins for Sybase Central would not be available. This has been fixed.
================(Build #4263 - Engineering Case #320974)================
When using the Erase Database wizard to erase a Write file, the Write file
would not have been displayed when browsing for it, unless the file filter
was changed to display all files. This has been fixed; the file filter now
includes Write files.
================(Build #4271 - Engineering Case #321929)================
If the value specified for a Global Autoincrement partition size in the Column
property sheet or the Domain wizard, was larger than 2^31-1, then the value
would have been ignored. Now, arbitrarily large values are respected.
================(Build #4279 - Engineering Case #324248)================
In the User Options dialog, the Set Temporary Now button was enabled whenever
an option was selected in the list, regardless of whether the option could
actually be changed via a SET TEMPORARY OPTION statement. Now, the button
is only enabled when displaying options for the PUBLIC group or the current
user.
================(Build #4286 - Engineering Case #325396)================
The "Time_format", "Date_format", and "Timestamp_format" options were being
ignored when displaying table data with the "Data" tab. This is fixed and
these options are now respected.
================(Build #4303 - Engineering Case #328984)================
When entering dates on the "Data" tab of a database table, all dates were
assumed to be in "YMD" order, even if the "Date_order" option was set differently.
Now, the setting of "Date_order" is respected.
================(Build #4304 - Engineering Case #321934)================
In the Create Database, Restore Database, Compress Database, Uncompress Database,
Create Write File and Erase Database wizards, if the option to have the wizards
start a new local server automatically was selected, and for some reason
the server could not be started or the connection could not be established,
then the wizards would have displayed an internal error. This has been fixed.
================(Build #3994 - Engineering Case #290138)================
When running in console mode, DBISQL would not have displayed BINARY values
correctly, they would have appeared as a string similar to "[B" followed
by up to 8 hexadecimal digits. Now, they appear as "0x" followed by a string
of hex digit pairs. This problem was restricted to results printed on the
console. It did not affect the windowed mode of the program, nor was the
OUTPUT statement affected.
================(Build #4075 - Engineering Case #293018)================
When the reload.sql script, generated by the dbunload utility, was run to
rebuild a database, it may have returned a permission denied error when attempting
to grant table or columns permissions. This would have happened if the user
who originally granted the permission no longer had the appropriate permission
at unload time. This problem has been fixed. All grants for table and column
permission now use the new additional FROM {grantor} syntax and run under
the DBA user specified for the unload.
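The generated grants are presumably of the form below, with the grantor named
in the FROM clause (user and table names are hypothetical):
    GRANT SELECT ON "u1"."t1" TO "u2" FROM "u1";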
================(Build #4076 - Engineering Case #279969)================
After rebuilding a Mobilink remote database using dbunload -ar, the DBMLSync
utility would have returned the error:
No off-line transaction log file found and an on-line transaction log
starts at offset ...
as dbunload did not save the current transaction log and reset the start
and relative log offset in the rebuilt database. This has been fixed.
================(Build #4076 - Engineering Case #296669)================
When calling xp_sendmail using SMTP, the first word of the body of an email
message could have been removed by some SMTP servers. The fix was to add
a carriage return/linefeed at the beginning of each body, so that the added
line is stripped out instead of the text.
================(Build #4076 - Engineering Case #298299)================
Due to an off-by-one error in the logic which filtered system procedures
out of the list of procedures received from the server, the "Lookup Procedure
Name" dialog could have shown a single system procedure, depending on the
order in which procedure names were returned from the server. The first item
in the list was never subject to the filter. This is now fixed.
================(Build #4076 - Engineering Case #298383)================
The Import Wizard could have reported an internal error if, on the second
page, the "Use an existing table" radio button was selected, the name
of a nonexistent table was entered, and the import was then attempted.
Now, users are not allowed to leave the second page if the table does not
exist.
================(Build #4076 - Engineering Case #298423)================
The configured caret color used by the syntax highlighting editor in DBISQL
was being ignored. This meant that the caret color would remain the system
default caret color (the color you'd see used for the caret in Notepad, for
example). Now the configured color is used.
================(Build #4076 - Engineering Case #298807)================
Providing connection information on dbisql's command line, (in either the
argument to the "-c" option or in the SQLCONNECT environment variable), would
have caused it to fail to connect if all of the following were true:
1. the "-odbc" option was used, and
2. connection was to the default local server, and
3. no data source was specified in the connection parameters
In this case, DBISQL would have opened the "Connect" dialog, from which you
could successfully log in. This is now fixed.
================(Build #4076 - Engineering Case #299058)================
Tilde (~) characters were not allowed in unquoted file names. Attempting
to use them caused the error "Lexical error". This problem has been fixed.
================(Build #4076 - Engineering Case #299301)================
Running the dbunload utility with options -ar or -an would not have created
a reload.sql file that had check constraints turned off. This may have caused
errors when loading data into the new database. This is now fixed.
================(Build #4076 - Engineering Case #299548)================
Pressing the F8 key while DBISQL was busy executing a statement, would have
caused it to report an internal error. This has been fixed.
================(Build #4078 - Engineering Case #299030)================
After rebuilding an ASA primary database using dbunload -ar, running the
Replication Agent would fail with the following error:
Unable to find log offset ...
Error processing log due to missing log(s)
Dbunload did not save the current transaction log and reset the start and
relative log offset in the rebuilt database. This has been fixed.
================(Build #4078 - Engineering Case #299754)================
Executing a statement which contained two consecutive comments, and the first
comment contained an unmatched quotation mark, and the second comment contained
a question mark, would have caused dbisql to report the error: "JZ0SA: Prepared
Statement: Input parameter not set, index: 0".
For example, the statement below would not have executed:
CREATE PROCEDURE test AS
BEGIN
-- "a
-- ?
END
This problem, which is related to issue 217915, has been fixed.
================(Build #4078 - Engineering Case #300115)================
When executing multiple consecutive
MESSAGE ... TYPE WARNING TO CLIENT
statements, dbisql may not have displayed the messages in the correct order.
For example, when executing the following compound statement:
begin
message 'one' type warning to client;
message 'two' type warning to client;
end
message "two" would have displayed before the message "one". This problem
has been fixed.
================(Build #4087 - Engineering Case #300125)================
When reading a script file which contained an EOF character, (0x1A), DBISQL
would have reported a "Lexical Error" rather than interpreting the character
as an end-of-file marker. Now the end-of-file character is recognized correctly.
Note that the end-of-file character is optional in script (.SQL) files.
================(Build #4092 - Engineering Case #302917)================
When writing preserved syntax to the reload.sql file, no newline was added
between the end of the object syntax and the 'preserve format' statement.
This meant that if a procedure, trigger, view or event handler was created
with a trailing comment, an error would occur when trying to rebuild the
database. While this is now fixed, a workaround for the problem is to avoid
placing a comment at the end of the object.
================(Build #4093 - Engineering Case #302368)================
DBISQL would have reported an internal error, if an INPUT statement was executed
which read a FIXED format file and all of the following were true:
- The table contained more than one column
- The INPUT statement contained a COLUMN WIDTHS clause
- Fewer column widths were given in the COLUMN WIDTHS clause than there
were columns in the table
- The INPUT statement did not explicitly list the table columns
This has now been fixed.
================(Build #4097 - Engineering Case #304242)================
Some long messages displayed by dbisql (or Sybase Central) would not have
been properly word wrapped when running on Unix platforms. Instead, the
entire message would appear on one line and be truncated. Long messages
now word wrap correctly.
================(Build #4106 - Engineering Case #305487)================
The Import Wizard could have failed to import a file if all of the following
conditions were true:
- data was being imported into a new table (i.e. the table did not exist
before running the Import Wizard),
- the file being imported was ASCII, FIXED, or SQL,
- the name of a column was changed from the defaults ("Column1", "Column2",
etc.)
The wizard would have failed with a message saying that Column1 (or whatever
the original column name was) could not be found. This problem has now been
corrected.
================(Build #4108 - Engineering Case #305817)================
Pressing the ESCAPE key while a menu was open did not close it. Now it does.
================(Build #4108 - Engineering Case #305871)================
If an attempt is made to execute a SQL statement after disconnecting from
a database, dbisql will attempt to reopen the connection and execute the
statement. This reopening of the connection would have failed if the connection
auto-started the database server; that is, if a DSN, FDSN, or DBF connection
parameter was given. This has now been fixed.
================(Build #4108 - Engineering Case #305936)================
Clicking the window close button, (the title bar button with an "X" in it),
would not have closed the dialogs listed below:
- the dialog which reports errors in SQL statements
- the data prompt dialog used for the INPUT ... PROMPT statement.
This has now been fixed so that the dialogs close.
================(Build #4119 - Engineering Case #308349)================
The QueryEditor would not have allowed an ORDER BY in the case where a table
was being joined to itself and the same column from each aliased table was
being used in the ORDER BY. This has now been corrected.
For example:
SELECT "m"."emp_lname", "e"."emp_lname"
FROM "DBA"."employee" AS m JOIN "DBA"."employee" AS e
ON "m"."emp_id" = "e"."manager_id"
ORDER BY "m"."emp_lname" ASC , "e"."emp_lname" ASC
================(Build #4120 - Engineering Case #307174)================
When connected to a database via the JDBC-ODBC bridge, empty strings in LONG
VARCHAR columns were displayed as NULL, rather than empty strings. This has
been fixed.
This problem affected the ASA plug-in for Sybase Central as well.
================(Build #4120 - Engineering Case #308346)================
Table names were being duplicated in the table expression lists on the Joins
page of the Query Editor. Table names should now only be listed once.
================(Build #4123 - Engineering Case #306179)================
When used on the dbping command-line, the -o switch was not being parsed
correctly. Therefore, instead of sending its output to a file, dbping -o
would display the usage message. This has been corrected.
================(Build #4206 - Engineering Case #306135)================
It was difficult to enter the following types of numbers as values when editing
a table in the "Results Pane":
- Negative numbers (e.g. "-123")
- Real numbers that began with a decimal point (e.g. ".123")
- Numbers in exponential notation (e.g. "1.23e2")
For example, typing the number -123 would have caused a beep when the "-"
was typed, and would not have allowed it to be entered in the editing box
for the table data. A workaround for this was to type the number without
the minus sign, then add the minus sign after. This bug affected the ASA plug-in
for Sybase Central as well. This is now fixed in both dbisql and Sybase Central.
================(Build #4209 - Engineering Case #307381)================
Selecting the "Insert spaces" option in the editor's "Customize" dialog,
would have been ignored, and would have been set back to "Keep tabs". This
has been fixed.
================(Build #4210 - Engineering Case #307751)================
When executing a .SQL file using the READ statement, if the file contained
a CONNECT statement where the user name was a parameter to the .SQL file,
and it was enclosed in double quotation marks, the CONNECT statement would
have failed.
For example, if the following statements were in a file called TEST.SQL:
PARAMETERS user_name;
GRANT CONNECT TO "{user_name}" IDENTIFIED BY "{user_name}";
GRANT DBA TO "{user_name}";
CONNECT USER "{user_name}";
and were executed by:
READ TEST.SQL [test]
an error would have occurred that the user "USER" did not exist. This has
been fixed.
A workaround for the problem is to remove the quotation marks from the CONNECT
statement.
================(Build #4211 - Engineering Case #307239)================
Specifying a File DSN as part of the connection string could have caused
any of the utilities to crash. This has been fixed.
================(Build #4212 - Engineering Case #298001)================
When calling a SQL procedure that declared an external Java method which
produced a result set, dbisql, when using the JDBC-ODBC bridge, would have
reported an invalid cursor operation. This problem is now fixed.
================(Build #4212 - Engineering Case #308507)================
When connected to a database via the JDBC-ODBC bridge, the "Auto commit"
option was implicitly on all the time, so that a commit was being done after
every statement. This has been fixed so that commits are now implicitly done
only if the "Auto_commit" option is "On".
This problem did not affect connections that used jConnect.
================(Build #4213 - Engineering Case #308693)================
When the name of a file was entered in the "Database File" field on the "Database"
page of the "Connect" dialog, and then the ESCAPE key was pressed while the
"Database File" field still had focus, an internal error would have occurred.
This has been fixed.
This problem also affected the ASA plug-in for Sybase Central, the Stored
Procedure Debugger, and DBConsole.
================(Build #4214 - Engineering Case #308847)================
If an attempt to connect failed, an error dialog with a "Show Details" button
appeared. Clicking this button shows a summary of the connection parameters
used in the attempt. Prior to this fix, the user name and password were
displayed twice and the password was shown in clear text. Now the user name
and password appear only once and the password is displayed as a series of
asterisks.
================(Build #4215 - Engineering Case #309181)================
Inserting a new row into a table (via right-clicking the Results pane and
selecting "Add") which contained a BIT column, would have inserted the wrong
BIT value if all of the following were true:
- if the BIT value had a default of "1", and
- if the BIT value was not explicitly set by the user, and
- connected using jConnect.
This has been fixed. This bug affected the table "Data" panel in the Adaptive
Server Anywhere plug-in for Sybase Central as well.
================(Build #4215 - Engineering Case #309350)================
When trying to view the table data for a proxy table, if the data could not
be retrieved because the userid or password for the remote database was no
longer valid, the "Add" (row) toolbar button and context menu items were
still enabled. This was inappropriate, and could have caused an internal
error if clicked.
This has been fixed; the toolbar buttons and menu items in the context menu
are now properly enabled.
================(Build #4216 - Engineering Case #308031)================
As an ASE compatibility feature, ASA was zero-padding binary strings for
TDS connections (i.e. jConnect). Since the ASA behaviour did not match ASE
behaviour (ASA zero-padded nullable strings, whereas ASE zero-padded not
null strings), and zero-padding gave the impression that a zero-padded binary
value was equivalent to a non zero-padded binary value in ASA, when in reality
it was not, the zero-padding feature has now been removed. ASA will no longer
zero-pad binary values for a TDS connection.
================(Build #4223 - Engineering Case #310887)================
DBISQL would have reported an internal error, if it was opened from Sybase
Central, (Right-click on the database icon and click "Open Interactive SQL"),
and a number of statements were executed all at once, followed by an EXIT
statement.
For example:
CREATE VARIABLE value INT;
SET value = 1;
EXIT value
It did not matter what the statements did, just that there were more than
one, and that the last one was an EXIT.
This problem has been fixed.
================(Build #4225 - Engineering Case #311584)================
If all of the following conditions were true, dbisql (or any of the
administration tools) could have failed to connect to a database:
- the "Connect" dialog was being used
- the JDBC-ODBC bridge was being used
- a value in the "Start line" field contained a blank.
The exact error message varied, depending on what other connection parameters
were specified and whether the database server was already running or not.
This problem has been fixed. A workaround is to connect using the jConnect
driver instead of the JDBC-ODBC bridge. Note, this was a problem in the
administration tools, not the JDBC-ODBC bridge per se.
================(Build #4225 - Engineering Case #311704)================
When editing table data in the "Results" panel of DBISQL an internal error
could have been reported if an existing cell was being edited and an invalid
value for the cell was entered, or a new row was added and invalid data was
entered in one or more of the cells,
and then a different row was selected by clicking with the mouse or by pressing
the UP or DOWN keys. An error message about the invalid data would have ben
displayed, and the row selection would change. Attempting to go back to the
row with the bad data and correct it would then have caused an internal error.
This problem has been fixed. Note, this problem also applied to the ASA plug-in
for Sybase Central.
================(Build #4230 - Engineering Case #311508)================
Binary data imported from an ASCII formatted file would not have been loaded
correctly if the data appeared as an unquoted sequence of hex digits, prefixed
by "0x". (This is the format generated by DBUNLOAD and DBISQL's OUTPUT statment.)
This has now been fixed.
================(Build #4239 - Engineering Case #315262)================
When opening a query in dbisql, parts of the query may have had the
quotes stripped off. The query editor's parser was being too aggressive when
parsing parts of a SQL statement. This is now fixed.
================(Build #4239 - Engineering Case #315415)================
When attempting to reload a database with dbisqlc, it could have failed with
an error that it could not open a data file that was greater than 2GB in
size. This problem has been corrected. A workaround is to use dbisql (Java
version); however, dbisql is not available on deployment platforms.
================(Build #4240 - Engineering Case #309620)================
If a file selected via the "File/Run Script" menu item, was in a directory
starting with the letter "n" (e.g. "c:\new\test.sql"), and the platform was
Windows, a file not found error would have occurred. This has now been fixed.
================(Build #4240 - Engineering Case #312950)================
The following problems, all related to the way DBISQL handled the DBKEY connection
parameter, have been fixed.
- DBKEY values which contained number sign ('#') characters were being mangled.
This would have prevented connecting to an encrypted database.
- if the DBKEY parameter was specified on the DBISQL command line, but there
was not enough information to connect, the Connect dialog was opened, but
the Encryption Key field was not filled in. Instead, the DBKEY value appeared
in the Advanced Parameters field.
================(Build #4246 - Engineering Case #317112)================
The QueryEditor was not qualifying tables with the owner name, which led
to problems when there were multiple tables with the same name. Now it uses
the user's ID if it matches one of the tables' owners; otherwise it guesses
and picks the last table found with a matching name.
================(Build #4246 - Engineering Case #317268)================
The QueryEditor was not parsing the ORDER BY clause correctly, if the query
had a HAVING clause. This is now fixed.
================(Build #4248 - Engineering Case #317638)================
The reload.sql script created by dbunload was not quoting user names that
appeared in CREATE SYNCHRONIZATION USER statements, causing a rebuild to
fail if the username contained spaces or consisted of numbers only. This
has been fixed.
================(Build #4249 - Engineering Case #317930)================
The Query Editor was producing bad SQL when given a query of the form:
SELECT *
FROM a join b,
(c join d ) join e
The parser was not processing the comma properly when it was followed by
a bracket. This has now been fixed.
================(Build #4250 - Engineering Case #317962)================
The UNCONDITIONALLY keyword of the STOP ENGINE statement was not being handled
correctly by dbisql. When executing a STOP ENGINE statement, a STOP ENGINE
UNCONDITIONALLY statement was actually sent to the database server, and when
executing STOP ENGINE UNCONDITIONALLY, a STOP ENGINE statement was sent to
the database. This has now been fixed.
================(Build #4250 - Engineering Case #318315)================
After connecting using jConnect, attempting another connection to a different
server would have failed, if a different port was used. This problem is now
fixed.
================(Build #4251 - Engineering Case #318320)================
Viewing data from a proxy table which referenced a table which no longer
existed, would have caused dbisql to report an internal error. This problem
appeared only when connecting using the ASA JDBC-ODBC bridge. Note, this
same problem affected the "Data" details panel in the Sybase Central ASA
plug-in as well. Both are now fixed.
================(Build #4254 - Engineering Case #319165)================
Updating VARBINARY and LONG BINARY values in the Results panel of dbisql,
or the Data tab of the Sybase Central Plug-in, was not being allowed. This
has been fixed. As well, the Table Editor is now more lenient about what can
be typed in for BINARY, VARBINARY and LONG BINARY values. The usual syntax
of "0x" followed by pairs of hex values is still supported. However, anything
else will now be sent to the server as a string which the server will convert
into a BINARY value.
================(Build #4259 - Engineering Case #320357)================
Using the Embedded SQL syntax, the statements:
EXEC SQL GET DESCRIPTOR sqlda :hostvar = DATA
or
EXEC SQL SET DESCRIPTOR sqlda DATA = :hostvar
could have failed to copy the correct amount of data if the host variable
type was DECL_DATETIME, DT_TIMESTAMP_STRUCT, DECL_LONGVARCHAR, DT_LONGVARCHAR,
DECL_LONGBINARY or DT_LONGBINARY. The SQLDATETIME structure used by the TIMESTAMP_STRUCT
structure is 14 bytes on platforms which do not require alignment, and 16
bytes on those that do. sqlpp was generating code with the structure length
hard-coded for the machine on which it ran. Now sizeof(SQLDATETIME)
is used in the generated code instead.
================(Build #4261 - Engineering Case #320708)================
The QueryEditor and the Expression Editor did not fit on the screen when
the resolution was set to 800x600. These windows have been resized to fit
an 800x600 screen.
================(Build #4263 - Engineering Case #320912)================
Computed columns that only consisted of "owner"."table"."column" were being
truncated to just "column" in the Query Editor. This problem has been fixed.
================(Build #4263 - Engineering Case #320929)================
An internal error would have been reported if:
- dbisql was running on a Windows operating system which was configured
to use a multi-byte character set (MBCS), such as Japanese or Chinese, and
- an OUTPUT TO ... FORMAT EXCEL statement was executed, and
- an exported string, which was less than 255 characters, had a MBCS encoding
which was greater than 255 bytes.
This problem has been fixed.
================(Build #4263 - Engineering Case #320960)================
It was not possible to open the Expression Editor to edit an ON condition
using the keyboard, it required double-clicking the cell with the right mouse
button. It was possible to TAB to the cell, but the keystrokes for editing
(F2 or space) were being ignored. This is now fixed.
================(Build #4268 - Engineering Case #321611)================
If, when using the QueryEditor to create a derived table, a column was aliased
with a reserved word (e.g. "from"), the generated SQL did not quote the alias
in the list of columns for the derived table. This has been fixed so that
now it does.
================(Build #4269 - Engineering Case #321834)================
When a START ENGINE statement was executed in dbisql, the quotation marks
around the engine name parameter were not stripped off. For example, the following
statement would have started an engine called 'Test' rather than Test.
START ENGINE AS 'Test'
Now, the quotation marks do not appear as part of the started engine's name.
================(Build #4272 - Engineering Case #322326)================
When connected to a database which used the Turkish (1254TRK) collation,
DBISQL exhibited a number of problems. These have been fixed, and are listed
below:
- If the server was started with the "-z" option, and the connection was
via jConnect, debugging information from DBISQL was displayed in the server
window. (It shouldn't be.)
- Connecting to an authenticated 1254TRK database was not possible.
- If connecting via jConnect, DATE, TIME, and TIMESTAMP column types were
all displayed as TIMESTAMP and were not in the appropriate format.
- The list of options listed by the SET command was incomplete.
================(Build #4273 - Engineering Case #322363)================
Lists of objects were not being sorted according to the rules for the current
locale. Now they are.
The affected lists were:
- The list of tables in the "Lookup Table Name" dialog
- The list of columns in the "Select Column" dialog
- The list of procedures in the "Lookup Procedure Name" dialog.
- The list of options displayed by the SET command
- The combo box which contained the list of columns in the Import wizard
- The list of database servers in the "Find Servers" window (opened from
the "Connect" dialog)
- The list of ODBC data sources in "Data Source Names" window (opened from
the "Connect" dialog)
================(Build #4273 - Engineering Case #322449)================
When dbisql was run in console mode (i.e. -nogui), the option Truncation_length
was being ignored. This has been fixed.
================(Build #4276 - Engineering Case #323206)================
When an off-line transaction log directory was not specified, log scanning
tools on CE would have failed to find the off-line logs if they were in the
root directory. This problem has been fixed.
================(Build #4277 - Engineering Case #323519)================
If the definition of a procedure, trigger, view or event exceeded 512 characters
and the definition contained a right brace ("}"), the preserved-format source
would have been truncated without warning or error by the dbunload utility.
The complete definition will now be output.
================(Build #4279 - Engineering Case #322435)================
Running dbisqlc in quiet mode (i.e. with the -q option) could have caused it
to crash when it was attempting to generate an error dialog. This has been
fixed.
================(Build #4279 - Engineering Case #323981)================
Unloading a database with the -ar or -an command line options on dbunload
could have resulted in a new database being created containing mangled data
and object names. For this to have occurred, the operating system character
set must have been different from the database character set on the machine
where the unload was being performed. These dbunload options are used simply
to make a new database with all of the same options as the original database.
As a result, the fix is to no longer do character set translation when these
switches are used.
================(Build #4279 - Engineering Case #324235)================
If a database which had no user DBA (i.e. a REVOKE CONNECT FROM DBA had been
done) was unloaded and reloaded using dbunload with the -ar or -an command
line option, or the Sybase Central Unload wizard with a new database file
specified, then the new database would have had a user DBA and the user that
had been used to do the unload would have had the password SQL. This is now
fixed.
================(Build #4284 - Engineering Case #324989)================
Starting dbisqlc on a Windows machine with a connection string that specified
a character set different from the OS character set (e.g. "cs=cp1251"), would
have caused it to crash on shutdown. This is now fixed.
Note, any executable that used the Unicode translation library, (dbunic9.dll
on 9.0.0 or libunic.dll otherwise), and which also loaded and unloaded the
DBLIB or ODBC client libraries could have encountered this problem.
================(Build #4286 - Engineering Case #325032)================
When run on Windows machines, certain graphical objects could have been inadvertently
displayed underneath the Windows task bar. Note that this problem was also
present in Sybase Central, the Database Object Debugger, and DBConsole.
These objects included:
- The splash window
- Context menus
- Property sheets, dialogs, wizards
- the editor window for stored procedures and dialogs opened from the editor
The problem was especially noticeable if the taskbar was docked to the left
or top edge of the screen. This has been fixed.
================(Build #4287 - Engineering Case #325734)================
The QueryEditor keeps a statement open so that sample results can be displayed.
As the query is changed, the statement is closed, recreated, and executed.
However, when the QueryEditor's dialog was closed, the statement was being
left open, leaving the table locked. Now, when the dialog is closed, the
statement is also closed.
================(Build #4289 - Engineering Case #325668)================
The dbunload utility was not adding the CLUSTERED keyword to the CREATE INDEX
statement in the reload.sql file for clustered indexes, clustered primary
keys, and clustered foreign keys. This has been fixed.
================(Build #4289 - Engineering Case #326072)================
If a database which had no user DBA (i.e. a REVOKE CONNECT FROM DBA had been
done) was unloaded and reloaded on Unix using:
- dbunload -ar or -an or
- the Sybase Central Unload wizard and a new database file was specified
then the new database would have had a user DBA and the userid used to do
the unload would have had the password SQL.
With builds 7.0.4.3469, 8.0.1.3121 or 8.0.2.4279 and higher, the error "Invalid
user ID or password" would be displayed. Earlier builds did not display
any error.
This has now been fixed.
================(Build #4294 - Engineering Case #320085)================
Attempting to run dbisql -nogui on Unix platforms not running X would have
failed with an internal error. This has been fixed; dbisql -nogui can now
be run without the need for an X server.
================(Build #4300 - Engineering Case #323132)================
If dbisqlc cannot find its component DLLs in the current directory or by
searching the path, it will access the registry to find the ASA installation
directory. It was accessing the registry with KEY_ALL_ACCESS permission,
which may have caused access errors. Now KEY_QUERY_VALUE is used.
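As a rough C sketch of the change (the registry path and value name below
are placeholders for illustration, not the actual ones used by dbisqlc):

  #include <windows.h>

  /* Open the key with read-only rights. KEY_QUERY_VALUE is sufficient
     for reading an installation directory and, unlike KEY_ALL_ACCESS,
     does not require write access to the key. */
  static BOOL get_install_dir( char *dir, DWORD size )
  {
      HKEY key;
      if( RegOpenKeyEx( HKEY_LOCAL_MACHINE,
                        "SOFTWARE\\Sybase\\ASA",   /* placeholder path */
                        0, KEY_QUERY_VALUE, &key ) != ERROR_SUCCESS ) {
          return FALSE;
      }
      if( RegQueryValueEx( key, "Location",        /* placeholder value */
                           NULL, NULL, (LPBYTE)dir, &size ) != ERROR_SUCCESS ) {
          RegCloseKey( key );
          return FALSE;
      }
      RegCloseKey( key );
      return TRUE;
  }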
================(Build #4303 - Engineering Case #326406)================
Rebuilding a 5.x database, using either dbunload or Sybase Central, could
have failed with a syntax error, if the option Non_keywords was set. This
has been fixed.
================(Build #4303 - Engineering Case #328376)================
DBISQL could have reported an internal error if there was more than one DBISQL
window open, and one of the windows was closed by clicking its close icon
in the title bar, while DBISQL was executing a statement, but was blocked.
This is now fixed.
================(Build #4305 - Engineering Case #326388)================
If the command delimiter option was set to something other than the default,
and the delimiter appeared in a quoted filename, the filename would have
been truncated. Quoted filenames are used in the following statements: INPUT,
OUTPUT, READ, START DATABASE, START LOGGING, and STOP DATABASE. This has
been fixed.
The workaround for this problem is to not change the command delimiter
option.
================(Build #4307 - Engineering Case #330440)================
Under certain circumstances, dbisqlc could have crashed when run on HP-UX
on Itanium. For example: if the ESC key was pressed while in the Connection
dialog box. This has been fixed.
================(Build #4076 - Engineering Case #293340)================
On case sensitive databases, it was possible that the MobiLink plug-in would
insert an incorrect password into the ml_user table. This has now been fixed.
================(Build #4079 - Engineering Case #300404)================
Specifying -mn "" on the dbmlsync command line would have caused it to crash.
This has been fixed.
================(Build #4083 - Engineering Case #301293)================
A MobiLink client on a Pocket PC 2002 device, using ActiveSync synchronization
over an ethernet connection, would have failed to find the ActiveSync provider
on the desktop. This is now fixed.
================(Build #4085 - Engineering Case #301415)================
When -d is specified on the dbmlsync command line, dbmlsync attempts to drop
any connections that are preventing it from locking tables for synchronization.
If the database engine was slow dropping the connections, synchronization
would fail with a message like:
SQL statement failed: (-210) User '<username>' has the row in '<tablename>'
locked
In this situation dbmlsync now tries for longer than it did in the past,
so failures of this type should be much less frequent.
================(Build #4087 - Engineering Case #296325)================
When dbmlsync is sleeping, if its top-level window receives a window message
registered using the name "dbas_synchronize", it will wake up and perform
a synchronization. If dbmlsync is already performing a synchronization, the
message will be ignored. This feature has been available on CE platforms
since version 8.0.0; now it is available on all win32 platforms.
================(Build #4087 - Engineering Case #302304)================
A new feature has been added such that when the dbmlsync top-level window
receives a window message registered using the name "dbas_synchronize" while
sleeping, it will wake up and perform a synchronization. If dbmlsync is already
performing a synchronization, the message will be ignored.
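As a rough Win32 sketch of how an application can use this feature (locating
dbmlsync's top-level window is left to the caller, since the window title
can vary):

  #include <windows.h>

  /* Post the registered "dbas_synchronize" message to a sleeping dbmlsync.
     If dbmlsync is already synchronizing, the message is simply ignored. */
  void request_synchronization( HWND dbmlsync_wnd )
  {
      UINT msg = RegisterWindowMessage( "dbas_synchronize" );
      if( msg != 0 && dbmlsync_wnd != NULL ) {
          PostMessage( dbmlsync_wnd, msg, 0, 0 );
      }
  }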
================(Build #4088 - Engineering Case #302224)================
Dbmlsync was leaking TLS indexes and memory on the first failed MobiLink
connection attempt. The leak did not grow for subsequent connection failures
throughout the process. This is now fixed.
================(Build #4088 - Engineering Case #302400)================
A bug has been fixed that could have caused any of the following problems
with dbmlsync when running on Windows CE:
1) Crashes
2) Hanging
3) Failure to add messages to the message log
================(Build #4092 - Engineering Case #303141)================
When dbmlsync was run using the DBSynchronizeLog entry point in the dbtools
DLL, a crash could have resulted if an 8.0.2 or later version of the dbtool8.dll
was used with an older version of the a_sync_db structure. The a_sync_db
structure is used to pass parameters to the DBSynchronizeLog entry point.
This problem has been fixed.
================(Build #4097 - Engineering Case #303920)================
When the progress offset in the consolidated database differs from that in
the remote database, the MobiLink server will send the consolidated database
progress offsets to the remote, ask that it synchronize again, and then wait.
However, if the requested offset was too small (old logs had been deleted),
too big (the log offset had not yet been generated), or in the middle of a
log operation, DBMLSync would display an error, ignore the synchronization
request and abort the synchronization. Then the MobiLink server would generate
the error "Unable to read from the 'tcpip' (or 'http') network connection".
Now if DBMLSync cannot find the requested offset, it sends the MobiLink
server an empty upload stream (an upload without any actual user data), plus
an error message. The MobiLink server will display this error message and
abort the synchronization.
================(Build #4099 - Engineering Case #304715)================
If errors occurred during upload DELETEs (for example, deleting rows that
do not exist in the consolidated database) and the MobiLink server was running
in a multiple-row mode (the command line switch -s X was used with X > 1
or no -s was specified), it may not have updated the progress offset in
the consolidated database, but it would have informed the client to update
the progress offset in the remote database. Therefore, the progress offset
in the consolidated database would have been smaller than that in the remote
database after the synchronization. In the next synchronization, DBMLSync
would have complained with the error "mismatch progress offset" and then
uploaded again the transactions that had been uploaded in the previous
synchronization, if all the previous transaction log files were still
available. However, if "Delete_old_logs" was on in the remote database, the
previous transaction log files may have been deleted, and DBMLSync would
then have complained with "missing transaction log(s) before file
log_file_name". These problems are now fixed.
================(Build #4106 - Engineering Case #305630)================
When run against a database initialized with a Turkish collation sequence,
dbmlsync would have failed to initialize the TCPIP communication stream.
The error returned would be: "Invalid communication type: 'tcpìp'". TCPIP
should now initialize correctly.
================(Build #4120 - Engineering Case #306920)================
If the following conditions were all true:
- the -x switch was specified on the dbmlsync command line
- the progress value on the remote was behind the progress value on the
consolidated
- the LockTables extended option was set to 'off'
then dbmlsync would have failed to generate an upload and reported the
following error:
"No log operation at offset of n in the current transaction log"
where the number n would be the ending log offset of the current transaction
log. This has been fixed.
================(Build #4201 - Engineering Case #305568)================
Since 8.0.0 we have shipped dbtool8.dll and dbmlsync.exe for CE. The dbtool8
DLL contained most of the logic for dbmlsync and the executable just did
command line processing, then called the DLL. The dbmlsync executable is
now linked against a static library containing the same code that goes into
dbtool8.dll, resulting in an executable that no longer depends on the DLL
and consumes about 240K less memory.
The dbtool8 DLL for CE will continue to be shipped, as it is required by
dbremote and can be used through the dbtools interface to programmatically
access dbmlsync's functionality.
================(Build #4203 - Engineering Case #305469)================
In dbmlsync, when the sp_hook_dbmlsync_download_end hook was called, an extra
entry with the name 'continue' was being added to the #hook_dict table.
The value for this entry was always FALSE. The entry was unintentional, undocumented
and has now been removed.
================(Build #4204 - Engineering Case #305634)================
When building the upload stream dbmlsync may have displayed a message box
with the caption "Assertion Failure" and the text "File 'mergeupd.c' at line
#873. Try to enter debugger?", (the line number in the text of the message
might have varied). This message would have been reported if the extended
option SendTriggers was set to 'off' (this is the default), and a row that
belonged to one of the publications being synchronized was deleted, then
inserted inside a trigger, then deleted again. This has now been fixed.
================(Build #4205 - Engineering Case #305093)================
Dbmlsync would have incorrectly deleted rows from the download stream which
had NULL foreign key values, if all of the following were true:
- table T contained a foreign key, and one or more of the columns of a row
R in T involved in the FK was NULL.
- another row, R2, existed in the download stream that dbmlsync had to delete
due to a legitimate RI violation involving the same foreign key in which
R had a NULL value.
This behaviour has been corrected.
================(Build #4208 - Engineering Case #305784)================
When building an upload, if dbmlsync encountered a certain sequence of operations
for a row being uploaded, it could have uploaded an incorrect operation on
that row, created an invalid upload stream, or displayed a message box with
the caption "Assertion Failure" and the text "File 'mergeupd.c' at line #873.
Try to enter debugger?". The line number in the text of the message would
have varied.
The sequences that could have caused this problem were (although it might
not reproduce consistently):
- insert a row R, execute STOP SYNCHRONIZATION DELETE, delete row R, execute
START SYNCHRONIZATION DELETE, insert a row with the same primary key as R
- if the dbmlsync extended option "SendTriggers" was off, insert a row R,
delete row R inside of a trigger, insert a row with the same primary key
as R
- if the dbmlsync extended option "SendTriggers" was off, insert a row R,
update row R inside a trigger so that it belonged to no publication, update
row R so that it again belonged to the publication
This is now fixed.
================(Build #4209 - Engineering Case #303136)================
When running on Windows CE, dbmlsync would sometimes have failed a synchronization
with the error message "Cannot load dbsock8.dll". Other DLLs, such as dbhttp8.dll,
could also have been reported as failing to load. Once the error occurred,
it was likely to have occurred consistently until the device was reset. A
new switch has been added to resolve this issue. The syntax of the new switch
is -pd <dllname>[;<dllname>...]. For example:
-pd dbsock8.dll
-pd dbsock8.dll;dbhttp8.dll
To resolve the problem, identify the communication protocols to be used by
dbmlsync during the synchronization, then use the -pd switch to preload the
DLLs used for those protocols as follows:
tcpip dbsock8.dll
http dbhttp8.dll
https dbhttps8.dll
We STRONGLY encourage everyone using dbmlsync on CE to use this switch even
if they have not encountered the error.
================(Build #4221 - Engineering Case #308527)================
If a database was initialized with the blank padding option (dbinit -b),
it was impossible for dbmlsync to hover at the end of the transaction log.
This has now been fixed.
================(Build #4228 - Engineering Case #312389)================
When disconnecting from a MobiLink server, the ASA client, dbmlsync, may
have crashed, hung, or have issued a spurious error at the end of a synchronization.
This is now fixed.
================(Build #4229 - Engineering Case #312536)================
The next available value for a GLOBAL AUTOINCREMENT column could have been
set incorrectly in the following situation:
- an INSERT into the table was executed using a value for the column outside
the range for the current setting of Global_database_id. This could happen
if rows from other databases were downloaded as part of a synchronization.
- the server was shutdown abnormally (e.g. by powering off) before a checkpoint
occurred.
The next time the database was started, the next available value for the
column would have been set incorrectly. If an INSERT was then attempted with
no value provided for the column, the error "Column '<col>' in table '<table>'
cannot be NULL" would have been reported. If the database was again shut
down and restarted, an INSERT into the table would have attempted to use
the first available value in the range for the current setting of Global_database_id.
Note that if rows had been deleted from the table, this could result in a
value being generated which had previously been used and which might still
exist in another database. Resetting the value using sa_reset_identity()
would correct the problem, assuming an appropriate value to use can be determined.
This problem has now been fixed.
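For reference, the reset mentioned above would look something like the
following (the table name, owner and value here are placeholders; an
appropriate value must be determined first):
  call sa_reset_identity( 'mytable', 'DBA', 101 );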
================(Build #4232 - Engineering Case #313503)================
When the dbmlsync command line contained a site or publication name that
was encoded differently in the remote database, the synchronization would
have failed. The problem is now fixed.
================(Build #4232 - Engineering Case #313603)================
If a sitename contained characters from a Multi-Byte Character Set, dbmlsync
would have failed to find the subscription or perhaps crashed. This problem
is now fixed.
================(Build #4245 - Engineering Case #310577)================
If dbremote was running in hover mode or if dbmlsync was running on a schedule,
and either process was running with the -x switch to rename and restart the
transaction log, then after the log was renamed for the first time, neither
dbremote nor dbmlsync would have sent changes to the consolidated database
until the process was restarted. Now, if the online log is renamed when in
hover mode, the data in the last page is processed and then the log-scanning
process is shut down and restarted.
================(Build #4263 - Engineering Case #320659)================
If a remote database contains only one publication and dbmlsync is running
scheduled synchronizations, dbmlsync will run in hover mode. Whenever errors
occur in an upload, dbmlsync should completely shut down the log scanning
layer and restart it again; however, this would not have occurred. This
problem is now fixed.
================(Build #4207 - Engineering Case #306609)================
The following applies to UltraLite synchronization on the Palm using the
HotSync conduit. When no synchronization stream parameters are specified
by the application in the ul_synch_info passed to PalmExit, the conduit
checks the registry, as documented, for parameters to use. If the registry
contains no parameters, a default of tcpip to localhost is used.
Previously, specifying no synchronization stream parameters required setting
ul_synch_info.stream_parms to NULL. Now, a setting of NULL or an empty string
will cause the registry to be checked for parameters to use. If an application
needs to force default parameters (overriding the registry), it should set
ul_synch_info.stream_parms = "stream=tcpip;host=localhost";
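A short C sketch of the two cases (this fragment assumes the usual UltraLite
header and that the rest of the synchronization setup is done elsewhere):

  ul_synch_info info;

  ULInitSynchInfo( &info );
  /* NULL and "" now both cause the registry to be checked: */
  info.stream_parms = NULL;
  /* or, to force the defaults and override the registry: */
  /* info.stream_parms = "stream=tcpip;host=localhost"; */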
================(Build #4208 - Engineering Case #315130)================
Extra debug logging to the HotSync or ScoutSync log can now be enabled for
the Palm conduit by setting environment variables.
UL_DEBUG_CONDUIT: When this variable is set, a message box will pop up on
conduit loading/unloading to confirm entry into the conduit. Requires user
intervention to proceed.
UL_DEBUG_CONDUIT_LOG: When set to 1, basic logging will be written to the
log, possibly including synch parameters, registry locations and attempts
to load libraries. When set to 2, basic logging as well as more detailed
IO logging will be written to the log.
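For example, to turn on the most detailed conduit logging before running
a HotSync from a Windows command prompt:
  set UL_DEBUG_CONDUIT_LOG=2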
================(Build #4076 - Engineering Case #299231)================
Attempts to connect to more than one server via jConnect would have failed,
if the servers were running on the same machine. The error message would
report that the database could not be started. This has been fixed.
================(Build #4227 - Engineering Case #312114)================
Table lists in the Test Script's Options dialog sometimes didn't reflect
the current setting of the order of tables for testing. Synchronization tables
could have disappeared from both the 'Synchronized Table Order' list and the
'Synchronization Tables' list, and thus could not have been selected
for testing.
For example, if a few tables were added to the 'Synchronized Table Order'
list, OK was clicked in the Options dialog and then the Options dialog was
opened again, the selected tables would all have appeared in the 'Synchronization
Tables' list and the tables that were previously in 'Synchronization Tables'
would have disappeared.
This problem has been fixed.
================(Build #4245 - Engineering Case #315382)================
When modifying a MobiLink password for a user in the user's properties page,
the password would have been updated to the wrong value. This has now been
corrected.
================(Build #4263 - Engineering Case #320976)================
When connecting to a database, the plug-in made a second, superfluous connection.
When disconnecting, this second connection was not closed. This has been
fixed; the second connection is no longer made.
================(Build #3602 - Engineering Case #301739)================
On Unix platforms, using File Save As in the MobiLink Monitor would have
written the file to the initial directory displayed in the dialog, instead
of the last directory chosen. If the user did not have write permissions
in that directory, then the Monitor would have given an inappropriate "file
... could not be found" message. Also, on both Windows and Unix the Monitor
would not have added an extension if there was a period anywhere in the
file path. So an error message would have resulted if the file name did
not end in either .csv or .mlm, and you could override the file type specified
in the dialog by typing the other file extension as part of the file name.
These problems have been fixed. The MobiLink Monitor now saves to the expected
directory for both Windows and Unix. Trying to save to a directory without
write permission on Unix, now gives a "file ... could not be accessed" message.
For both Windows and Unix, the Monitor now adds the extension chosen in the
Save dialog, if it is not already at the end of the file name.
================(Build #4076 - Engineering Case #299246)================
With the UltraLite Java runtime (ulrt.jar) that did not include secure stream
classes from Certicom, attempting to use the HTTPS protocol would have given
a stack trace. This has been fixed. The HTTPS option is now disabled unless
the UltraLite Java runtime includes the secure stream classes from Certicom.
================(Build #4076 - Engineering Case #299440)================
When using large fonts with Windows, the MobiLink Monitor's New Watch and
Edit Watch dialogs could have truncated text in the Remove button and in
the time units combo box, if a time-based property was selected. This has
been fixed.
================(Build #4083 - Engineering Case #301081)================
Dialogs in the MobiLink Monitor deviated from the Windows Interface Guidelines
in the following ways, which affected keyboard accessibility:
- The Watch Manager, New Watch and Edit Watch dialogs did not have default
buttons.
- OK, Cancel and Help buttons had mnemonics. Windows applications do not
use mnemonics for these buttons because there are standard keyboard shortcuts
(Enter for OK, Esc for Cancel and F1 for Help).
- Due to a JDK 1.3 bug (fixed in JDK 1.4), the default button would be the
last button to have the focus.
These problems have been corrected.
================(Build #4095 - Engineering Case #303735)================
If MobiLink Monitor was started from the Start menu or from a command prompt
in a root folder (such as C:\) then the default autosave file name would
have had double slashes (such as C:\\autosave.mlm). Also, the Browse dialog
would have opened in the user's home folder, rather than with the file specified
in the text box. These problems have been fixed.
================(Build #4215 - Engineering Case #308830)================
Command line options have been added to the MobiLink Monitor so that it can
open a file or connect to a MobiLink server on startup. This allows file
associations to be set up for MLM files, or the Monitor to connect to a MobiLink
server, then close and save results to a file when disconnected (either from
the GUI or from the MobiLink server shutting down). The following command
will show the options: dbmlmon -?
================(Build #4245 - Engineering Case #316700)================
The MobiLink Monitor window would always show a title of "Started - MobiLink
Monitor" after saving a file or using the View/Go To command. This has been
fixed.
================(Build #4255 - Engineering Case #319257)================
Starting the MobiLink Monitor with command-line option /? would have shown
an error opening /? as a file, instead of showing the usage. This has been
fixed.
================(Build #4259 - Engineering Case #320578)================
A new option, (under Tools>Options>General), "Prompt to connect on startup"
has been added to open the Connect dialog on startup if no command line options
are specified. The default is for the option to be on.
================(Build #4260 - Engineering Case #320657)================
The Watch Manager was missing mnemonics for list boxes. These have now
been added.
================(Build #4268 - Engineering Case #321594)================
The list of tables in the Synchronization Properties was not being sorted.
Now it is.
================(Build #4268 - Engineering Case #321664)================
The Monitor was sorting the worker threads as strings. Now they are sorted
numerically by the stream number, then numerically by the thread number.
================(Build #4269 - Engineering Case #321844)================
The list of columns in the table did not reverse the sort order if the header
was clicked more than once, as is standard. Now when a column header is clicked
more than once, the sort order alternates between ascending and descending.
================(Build #4290 - Engineering Case #326172)================
When connected to a MobiLink server, and displaying the chart in By User
view, synchronizations could have been shown in the wrong rows when new rows
were added (unless they happened to be added in alphabetical order). A workaround
to fix the chart is to change to By Worker Thread then back to By User. This
problem has been fixed.
================(Build #4302 - Engineering Case #328353)================
If a Monitor session had enough users to cause a vertical scrollbar to be
shown in the chart in the 'By User' view, then the overview outline would
not have been the correct height when the view was changed to 'By Sync'.
This has now been corrected.
================(Build #4302 - Engineering Case #328466)================
When attempting to save data to an mlm file, if there was insufficient disk
space, an error message would have been displayed, and a zero length file
would have been created. If an existing file was being overwritten, the
existing file would be replaced by a zero length file. Now, if there is
insufficient space to write an mlm file, the error message is displayed and
nothing is written to disk.
================(Build #4305 - Engineering Case #329774)================
The First, Last, Next, and Previous menu items were enabled, but had no effect
if the table was disabled. They are now disabled if the table is disabled.
================(Build #4306 - Engineering Case #329924)================
When a second file or session was opened, the chart pane would have remained
scrolled at the same position it was at for the first file or session. Similarly,
although the chart pane had no scrollbar when the Monitor was first started,
the scrollbar remained visible after a session was closed. Now, when a second
session is opened the chart pane is scrolled to the beginning, and when a
session is closed, the scrollbar disappears.
================(Build #4088 - Engineering Case #304584)================
Starting the Mobilink server with two instances of an HTTP based stream could
have caused the stream to behave erratically. This is now fixed.
For example:
dbmlsrv8 -c ... -x http {port=22222} -x http {port=88888}
================(Build #4091 - Engineering Case #302923)================
When the stream connection string contained an invalid value for the "version"
option (ie. anything other than "1.0" or "1.1"), or an invalid value for
the "content_type" option (ie. anything other than "fixed" or "chunked"),
the stream would have leaked a few bytes of memory. It was possible for
synchronizations to succeed, even though the options were specified incorrectly.
Both the HTTP and HTTPS streams were affected; both are now fixed.
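For example, using the option syntax shown elsewhere in this readme, a
correctly specified stream might look like (the port value is arbitrary):
  -x http {port=8080;version=1.1;content_type=chunked}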
================(Build #4093 - Engineering Case #303298)================
Synchronizations with HTTP 1.1 and nonpersistent connections through a proxy
server could have failed. This is now fixed.
================(Build #4098 - Engineering Case #304478)================
When connecting to a busy Mobilink server using HTTP or HTTPS, the communication
error 65 (Unable to connect a socket) could have occurred. A failed connect
attempt caused an error condition to be set that prevented retries. This
has been corrected.
================(Build #4107 - Engineering Case #305477)================
Database scripts that took longer than the 'contd_timeout' setting could
have caused the HTTP link to time out a connection, which would have caused
the synchronization to fail. This has been fixed.
================(Build #4255 - Engineering Case #319387)================
On Win32 platforms, a client may have failed when closing a tcpip or http
connection, with a system error 10093. This has now been fixed.
================(Build #4273 - Engineering Case #322620)================
In a successful synchronization through ECC, RSA, or HTTPS, the stream error
STREAM_ERROR_WOULD_BLOCK would have been reported, even though the sqlcode
was SQLE_NOERROR. The secure streams no longer report this error.
================(Build #4282 - Engineering Case #324845)================
The connection between the ISAPI redirector and MobiLink could have timed
out during data synchronization, resulting in the data synchronization failing.
This was fixed by keeping the connection open. A workaround is to increase
the system timeout interval (the default is 4 minutes) by setting the following
registry key (value is in seconds):
\\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\TcpTimedWaitDelay
================(Build #3604 - Engineering Case #302966)================
The Mobilink server, when using the HTTP or HTTPS stream, could have crashed
on shutdown. On Windows, this would have left the icon in the task bar. This
is now fixed.
================(Build #3605 - Engineering Case #303302)================
Synchronizations using HTTP through a proxy server would have failed, if
the proxy server changed the content encoding of the message body from fixed
length to chunked. This is now fixed.
================(Build #4076 - Engineering Case #298595)================
The MobiLink server could have failed with an ODBC function sequence error
when a zero length blob was inserted or a statement-based update was issued,
with blobs that were NULL or zero length. This has now been fixed.
================(Build #4076 - Engineering Case #298604)================
On Unix, with the option -v+, MobiLink could have written incorrect SQL
statements to the log file when logging native SQL statements. When doing
the conversion between multibyte and Unicode characters, a NULL termination
character was missed. This is now fixed.
================(Build #4076 - Engineering Case #299081)================
When using ASA as a consolidated database for MobiLink, if any of the tables
to be synchronized were located on a remote server (ie proxy tables), the
MobiLink server would have encountered the following error:
"Feature 'remote savepoints' not implemented"
and failed the synchronization. This case is now detected and silently
worked around.
================(Build #4080 - Engineering Case #299055)================
You no longer need DBA authority to connect the Mobilink Server to an ASA
database.
================(Build #4080 - Engineering Case #300502)================
If errors occurred during the Prepare_for_download and/or Download events,
the MobiLink server would have committed the upload on the server side quietly,
without notifying the client. The ASA client would then have displayed the
error "communication error..." and rolled back the upload. In the next run,
dbmlsync would again have complained with "Progress offset mismatch, resending
upload from the consolidated database's progress offset". This problem is
now fixed.
================(Build #4120 - Engineering Case #307385)================
The MobiLink server could have crashed on shutdown. This was unlikely to
have affected synchronizations. It has now been fixed.
================(Build #4123 - Engineering Case #307388)================
When a second MobiLink monitor attempted to connect, it would have received
an "unable to read N bytes" error. It now will get an error indicating that
another user is monitoring.
================(Build #4123 - Engineering Case #307615)================
The MobiLink option -vh dumps out the schema, but it didn't display the column
names; it substituted Column #n for each name. This option has been improved.
For clients that don't send the column names to MobiLink, the schema logging
will be the same as before; but for clients that do send the column names,
MobiLink will now display the column names in place of Column #n.
================(Build #4201 - Engineering Case #305551)================
A new command line switch, "-vp -- show progress offsets", has been added
to the MobiLink server. With this command line switch, (-v+ may also be
used), the MobiLink server will log the consolidated and remote progress
offsets in its output file for each publication in every synchronization.
The publications may include those that are explicitly involved in the current
synchronization, as well as the ones that are not explicitly involved, but
are in the same remote database. If the consolidated progress offsets do
not match the remote progress offsets, the MobiLink server will print these
offsets in its output file, no matter whether this switch is used or not.
================(Build #4201 - Engineering Case #315473)================
A new command line option -vp (or -v+), "show progress offsets", has been
added to the MobiLink server. With this new option, the MobiLink server will
log the consolidated and remote progress offsets in its output file for each
publication in every synchronization. The publications may include those
that are explicitly involved in the current synchronization, as well as the
ones that are not explicitly involved in the current synchronization, but
are in the same remote database. If the consolidated progress offsets do not
match the remote progress offsets, the MobiLink server will also print these
offsets in its output file, no matter whether -vp or -v+ is used or not.
UltraLite sequence numbers (ie. ml_user.commit_state on the consolidated
side) are also shown.
================(Build #4206 - Engineering Case #306131)================
When a table is added to a Mobilink consolidated database, it is assigned
a unique tableid. Scripts created to work on this table will use this tableid
when loaded from the database. If the tableid had a value greater than 65535,
it was possible that the script would not have loaded, or the wrong script
would have been loaded. This has been corrected.
================(Build #4213 - Engineering Case #289423)================
If the begin_connection synchronization script caused an error, and the handle_error
event returned a value of 4000, indicating that the MobiLink server should
be shut down, it would not have been shut down. This has been fixed and the
MobiLink server now shuts down as expected.
================(Build #4213 - Engineering Case #308639)================
When the server running a remote database encountered an error during download,
and the MobiLink server was expecting a download acknowledgement, there would
have been extra communication errors in the MobiLink log. The first error
would have been "Download failed with client error -NNN", and then spurious
communication errors would have followed. This has now been fixed.
================(Build #4218 - Engineering Case #309512)================
The error and warning counts passed to the following scripts:
upload_statistics
download_statistics
synchronization_statistics
could have been wrong. The most likely wrong value was one that was non-zero
when it should have been zero. Less likely, the counts could have been smaller
than expected. Similarly, the error and warning counts displayed by the MobiLink
Monitor could also have been wrong. This has been fixed.
================(Build #4230 - Engineering Case #312932)================
Using statement-based scripts could have caused an ODBC error, if the -zd
switch was not used when starting the MobiLink server. The upload_new_row_insert
and upload_old_row_insert events were affected by this. This has now been
fixed.
================(Build #4237 - Engineering Case #314861)================
The MobiLink server may have crashed when executing an ODBC statement that
had been previously executed. This would only have occurred if the script
happened to be identical for two different tables. This was most likely to
have occurred when using download_delete_cursor and the truncate table feature,
where the script was typically "select NULL". One workaround is to add some
comment text to the script to make it unique. This problem has now been fixed.
================(Build #4239 - Engineering Case #315475)================
The syncase125.sql script, which sets up MobiLink system tables, was not
updated when 8.0.2 was released. It is now fixed, but it can be updated manually
by adding the following to the end of the file:
exec sp_procxmode 'ml_add_user', 'anymode'
exec sp_procxmode 'ml_add_table_script', 'anymode'
exec sp_procxmode 'ml_add_lang_table_script', 'anymode'
exec sp_procxmode 'ml_add_java_table_script', 'anymode'
exec sp_procxmode 'ml_add_dnet_table_script', 'anymode'
exec sp_procxmode 'ml_add_connection_script', 'anymode'
exec sp_procxmode 'ml_add_lang_connection_script', 'anymode'
exec sp_procxmode 'ml_add_java_connection_script', 'anymode'
exec sp_procxmode 'ml_add_dnet_connection_script', 'anymode'
go
================(Build #4245 - Engineering Case #315662)================
If MobiLink called a user implemented .NET script and the script had an incorrect
signature (argument list), then the MobiLink server would have shut down.
The MobiLink server now aborts the synchronization only and continues to
accept connections.
================(Build #4253 - Engineering Case #302620)================
MobiLink synchronization was not working for the following three Japanese
characters when used with a Japanese database (collation 932JPN):
0x8160 Wave Dash
0x8161 Double Vertical Line
0x817c Minus Sign
This has been fixed.
================(Build #4272 - Engineering Case #322076)================
When connecting to the Synchronization Server with Version 9.0 client software,
the server's worker thread could possibly have hung. This has been corrected,
but as a workaround, the worker thread can be recovered by killing the client
process.
================(Build #4272 - Engineering Case #322165)================
When the MobiLink server was run with the command line option -sl Java, it
may have hung on shut down. This would have been more likely to have happened
when AWT support was invoked. This problem is now fixed.
Another problem was that the server may have failed to report the list of
non-daemon threads being stopped due to a shutdown. Now the complete list
is reported as an error so that users may implement their shutdown code correctly.
================(Build #4272 - Engineering Case #322199)================
The Synchronization Server may not have responded to tcpip requests by default.
This has been fixed, but a workaround is to specify tcpip explicitly using
the option -x tcpip( ... )
================(Build #4279 - Engineering Case #316036)================
When the MobiLink Server was running against ASE, if the ASE server was shut
down for some reason and restarted, the MobiLink server would have kept running,
not knowing the ASE server had been restarted. The MobiLink server will now
detect that the ASE server has been restarted and reconnect.
================(Build #4283 - Engineering Case #323844)================
Synchronizations would have failed reporting an error like the following:
SQL statement failed: (-194) No primary key value for foreign key '<role
name>' in table '<fk table name>'
if the remote database was blank padded and the download stream contained
one or more rows that violated foreign key constraints on tables in the publication
being synchronized. Now dbmlsync will go through its normal RI cleanup procedures
in this case. These procedures generally result in the rows that violate
the foreign key constraints being deleted.
================(Build #4289 - Engineering Case #325868)================
Starting the evaluation version of the MobiLink server would have shown
the 60-day evaluation notice and waited for user input. Pressing 'Enter'
would have caused the notice to be re-displayed. The evaluation screen
message that asks the user to hit 'Y' or 'Yes' to accept the license
agreement was not being displayed. This is now fixed, but a workaround
is to enter 'Y' or 'YES' to confirm acceptance of the evaluation license
agreement.
================(Build #4290 - Engineering Case #326150)================
The MobiLink server may have crashed when using cursored uploads. This is
now fixed, but a workaround is to disable the statement cache (-hwC+) in
versions 8.0.0 and later. No workaround is available for earlier versions.
================(Build #4301 - Engineering Case #328180)================
Some of the download scripts take an optional timestamp parameter (last_download_timestamp).
This timestamp can be modified by creating a modify_last_download_timestamp
script which is called before the download scripts. The unmodified timestamp
was being passed to the connection scripts, now the modified timestamp is
used. Table scripts were not affected.
================(Build #4307 - Engineering Case #330441)================
The -zt command line option for dbmlsrv8 was mistakenly shown in the usage
displayed on Unix platforms. This switch is only available on Windows, and
is now only shown in the usage displayed when running on Windows.
================(Build #4093 - Engineering Case #302789)================
When an ActiveSync synchronization was initiated, the desktop provider would
wait 30 seconds for a response from the client and would fail if it didn't
receive it. This timeout value can now be set using a new -t flag when running
dbasinst to register the ActiveSync provider. Here is a description of the
-t flag from dbasinst's new usage:
-t<n> the desktop provider should wait n seconds for a
response from the client before timing out; the
default is 30
dbasinst stores the timeout value in the registry under "HKEY_LOCAL_MACHINE\SOFTWARE\Sybase\MobiLink\ActiveSync
Synchronization\Connection Timeout".
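For example, to have the desktop provider wait 60 seconds (any other
dbasinst options are omitted here):
  dbasinst -t60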
================(Build #4092 - Engineering Case #302822)================
If code executed in a user implemented shutdown listener called addShutdownListener()
or removeShutdownListener() during shutdown, an exception could have appeared
in the MobiLink output log and some shutdown listeners may not have been
called at shutdown time. This is now fixed.
================(Build #4114 - Engineering Case #306344)================
When calling the Mobilink system procedure ml_add_table_script on DB2 with
a "null" script parameter value, it would have failed with the message: SQL4302N
Java stored procedure or user-defined function "DB2ADMIN.ML_ADD_TABLE_SCRIPT",
specific name "SQL021223140150845" aborted with an exception "[IBM][CLIDriver]
CLI0115E SQLSTATE=38501". Note, the name "SQL021223140150845" may be different
on different systems. Both ml_add_connection_script and ml_add_table_script
had this bug, which has been fixed.
================(Build #4120 - Engineering Case #307194)================
If a JAVA or .NET handler for the report_error event was defined and it used
parameters and no script was defined for the handle_error event, Mobilink
would report an error similar to: "Not enough arguments to execute script:
"moderr.report_error". 5 arguments are needed only 0 are available.". This
is now fixed.
================(Build #4099 - Engineering Case #304512)================
When an offline transaction log ended with a zero-byte string for the last
valid log operation, SQL Remote could have reported "Log operation at offset
X0 has bad data at offset X1" (where X1 > X0 ), if this last log operation
was on the last log page, or "Transaction log file file_name1 overlaps with
log file file_name2", if this last log operation was not on the last log
page (log files contain one or more unused pages).
This problem could have happened in DBMLSync and DBLTM as well. The problem
has been fixed.
================(Build #4206 - Engineering Case #304766)================
If the connection that was used for processing incoming messages was dropped,
SQL Remote for ASA would have gone into an infinite loop displaying the error:
"SQL statement failed: (-101) Not connected to a database". The errors
SQLE_CONNECTION_TERMINATED and SQLE_NOT_CONNECTED were not handled properly.
This problem is now fixed.
================(Build #4275 - Engineering Case #322897)================
If the starting log offset X (the minimum value of the log_sent column in
the sysremoteuser table for dbremote, or the minimum value of the progress
column in the syssync table for dbmlsync) in the current replication or
synchronization was greater than the redo_begin of the on-line transaction
log, any log operations with offsets less than X, in transactions that started
in the offline transaction logs and committed at offsets greater than X,
could have been skipped by dbremote and/or dbmlsync. This problem has been
fixed.
================(Build #4250 - Engineering Case #297714)================
When dbremote (or ssremote) encounters an error when trying to delete a file
using the FILE based message link, the delay between the failure and the
next attempt to delete the file can now be set using the "unlink_delay" message
control parameter for the FILE based message link.
For example :
SET REMOTE file OPTION "public"."unlink_delay" = '10';
Dbremote will pause 10 seconds before attempting to delete the message again.
The number of attempts made to delete the file (five) has not changed.
================(Build #4285 - Engineering Case #322583)================
Dbremote would have reported an error if the FTP server returned the error
code '450' in response to the NLST command on an empty directory. Dbremote
now treats this as an empty directory.
================(Build #4303 - Engineering Case #328956)================
If the minimum log_sent value in the SYSREMOTEUSER table pointed to the middle
of an operation in the transaction log, then dbremote would still have sent
data starting at the beginning of the transaction pointed to by the bad offset.
Now, dbremote will return an error (No log operation at offset X) in this
situation.
================(Build #4077 - Engineering Case #299786)================
The UltraLite analyzer was generating a call to a macro that was not defined
when processing a query that used the STR() function with the last 2 (optional)
parameters omitted. The analyzer has been changed so that it generates the
defined macro with the default parameters supplied, if they are not specified.
A workaround is to add the macro:
#define UL_STR_CHAR_DOUBLE( l, dst, s1 ) \
ULStr( UL_STRING(dst), s1, (ul_s_long)10, (ul_s_long)0 )
to the user's ulprotos.h file. This workaround only works if both optional
parameters to the function are omitted.
================(Build #4209 - Engineering Case #308453)================
For Palm applications, the ULSEGDB segment output by the analyzer contains
row management code for all tables. If an application had a large number
of tables, the ULSEGDB segment may have exceeded the maximum size of 64k.
A workaround when this occurs is to edit the generated file and insert extra
segments manually.
The analyzer now outputs optional segment divisions within ULSEGDB code
which are enabled individually by defining a corresponding preprocessor symbol.
There is an optional segment division at each table. The segments are named
ULSTn where n is an integer (the table number), and each is activated by
defining UL_ENABLE_SEGMENT_ULSTn.
For example, suppose an application has 5 tables. The generated code will
resemble:
start ULSEGDB segment
start ULST1 segment (if UL_ENABLE_SEGMENT_ULST1 defined)
<code for first table>
start ULST2 segment (if UL_ENABLE_SEGMENT_ULST2 defined)
<code for second table>
...
start ULST5 segment (if UL_ENABLE_SEGMENT_ULST5 defined)
<code for fifth table>
Defining only UL_ENABLE_SEGMENT_ULST4 will leave tables 1 through 3 in ULSEGDB
but put tables 4 and 5 into ULST4.
Extra segments should be enabled as required to avoid exceeding the segment
maximum size. Note also this assumes generated code segments have been enabled
by defining UL_ENABLE_SEGMENTS (and UL_ENABLE_GNU_SEGMENTS if applicable).
In CodeWarrior, these #defines should be added to the prefix file; for PRCTools,
use the -D option.
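For example, to activate only the ULST4 segment from the layout above, a
CodeWarrior prefix file could contain:

  /* Enable generated code segments, then activate the optional ULST4
     segment so that tables 4 and 5 move out of ULSEGDB. */
  #define UL_ENABLE_SEGMENTS
  #define UL_ENABLE_SEGMENT_ULST4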
================(Build #4269 - Engineering Case #321682)================
Attempting to run ulinit or ulgen against an ASA reference database with
a Turkish collation, e.g. 1254TRK, would have caused the error "Table
'sysarticle' not found". This has been fixed.
================(Build #4273 - Engineering Case #321970)================
When a query made use of an index on a column that allowed NULLs, it was
possible that the UltraLite analyzer would have generated code that referenced
the identifier ULConstantNull_ANY. Since this constant was not defined, the
application would consequently not compile. This has been fixed.
================(Build #4080 - Engineering Case #300534)================
The analyzer could have generated code after a generated return statement.
This code, which was only generated for the Palm platform, was intended to
free memory. This has been fixed.
================(Build #4087 - Engineering Case #298914)================
The UltraLite analyzer would have generated code for some queries that declared
the same variable name twice. This has been fixed.
================(Build #4218 - Engineering Case #307354)================
Under certain rare conditions, the "Get" functions generated by the C++ API
would have been missing code fragments, resulting in compilation errors and/or
erroneous results. This has now been fixed.
================(Build #4251 - Engineering Case #317934)================
When the -x command-line option was used with ulgen, some of the filenames
in generated #include statements used absolute file names. This has now
been changed to generate relative file names instead.
================(Build #4214 - Engineering Case #308329)================
This change works around a problem in Codewarrior 6 and 7, where they would
sometimes not handle compiling an application with an empty segment. This
problem would have led to an error while running the application, such as
"Application has just read from low memory... ". A fix was put into a Codewarrior
7 patch, which was included in all versions of Codewarrior 8 and 9. The empty
segment in the Codewarrior 7 Certicom runtime has been removed.
================(Build #4272 - Engineering Case #322340)================
Disabling the ActiveSync provider after a synchronization would have caused
ActiveSync to hang. This has been fixed.
================(Build #4077 - Engineering Case #299822)================
Connection.getTable(), TableSchema.getOptimalIndex() and TableSchema.getIndex()
would have returned SQLE_CLIENT_OUT_OF_MEMORY when the table or index requested
could not be found. Now SQLE_INDEX_NOT_FOUND or SQLE_COLUMN_NOT_FOUND is
correctly returned.
================(Build #4079 - Engineering Case #300214)================
Calling TableSchema.isColumnGlobalAutoIncrement() with an invalid column
name would have caused a crash. It now throws a SQLException with SQLE_COLUMN_NOT_FOUND.
================(Build #4299 - Engineering Case #327668)================
Attempting to delete all rows matching a certain criteria, assuming there
was an appropriate index, may have missed some rows or deleted the wrong
rows.
For example:
t.findBegin();
// specify search criteria using t.set*(...)
t.findFirst();
t.delete();
while( t.findNext() ) {
    t.delete();
}
would have skipped some rows because delete() modified the search criteria.
If there were an odd number of rows matching the search criteria, this approach
would have deleted rows that did not match the search criteria.
Table.delete(), Table.truncate(), and Table.deleteAllRows() have now been
changed so as to cancel all edit and search modes.
================(Build #4076 - Engineering Case #299222)================
Under certain conditions, an empty result set was not produced when GROUP
BY was used and no rows were obtained. The one-row result set contained
meaningless values. This has been fixed.
================(Build #4076 - Engineering Case #299224)================
Under certain conditions, an empty result set was not produced when GROUP
BY was used and no rows were obtained. The one-row result set contained
meaningless values. This has been fixed.
================(Build #4097 - Engineering Case #303926)================
As of version 8.0.2 the cipher used to encrypt the data stream was changed
from Certicom_tls to ECC_tls. For the Mobilink server, both names were kept
for backwards compatibility, however this was not done for Ultralite. The
documentation currently reads:
"For Hotsync and Scoutsync, the stream parameters need to be specified in
the stream parameters in much the same way as for Adaptive Server Anywhere
MobiLink clients. The format is:
security=cipher{ keyword=value;... }
where cipher must be certicom_tls... "
This should have read as:
"where cipher must be ecc_tls..."
For backwards compatibility, ecc_tls can also be specified as certicom_tls.
================(Build #4105 - Engineering Case #306301)================
When using multiple threads with UltraLite (and hence multiple connections),
rows could have been lost or not visible, the database could have appeared
corrupt, or synchronization could have failed. The incorrect connection number
could have been used in any given call. This has been fixed.
================(Build #4200 - Engineering Case #304950)================
During a ULSynchronize call, the ul_synch_info output fields were not necessarily
set. This would only have been a problem if ULInitSynchInfo was not called
before each synchronization (because ULInitSynchInfo clears all output fields),
but the documentation did not explicitly state this was a requirement. Now
all output fields are always set, so it's possible to call ULSynchronize
twice without an intermediate ULInitSynchInfo call if desired.
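A brief sketch of the now-supported pattern (connection, stream and field
setup are elided; error handling omitted):

  ul_synch_info info;

  ULInitSynchInfo( &info );        /* clears all fields, including outputs */
  /* ... set user name, script version, stream, etc. ... */

  ULSynchronize( &sqlca, &info );  /* output fields set here */
  ULSynchronize( &sqlca, &info );  /* output fields fully set again, with no
                                      intermediate ULInitSynchInfo needed */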
================(Build #4209 - Engineering Case #308450)================
In version 8.0.1, the UltraLite schema information output by the analyzer
was extended to support additional features, including sending column names
to Mobilink. This resulted in more data segment usage by the generated code,
possibly exceeding the Palm limitation of 64kb. The extended schema information
for columns may now be omitted to decrease data segment usage. To enable
this feature, define the preprocessor symbol UL_OMIT_COLUMN_INFO before compiling
all generated files. (In CodeWarrior, this #define should be added to your
prefix file; for PRCTools, use the -D option.)
Note: when this feature is enabled, the following features are not available
and must not be used:
- send_column_names synchronization parameter
- schema upgrades (applies to 8.0.2 and later)
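For example, the symbol can be defined in a CodeWarrior prefix file:

  /* Omit extended column schema information to reduce data segment usage.
     Not compatible with send_column_names or schema upgrades. */
  #define UL_OMIT_COLUMN_INFO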
================(Build #4215 - Engineering Case #309299)================
When the database store was set as in-memory (ie no persistent file), Java
UltraLite could have experienced temporary table corruption when doing updates.
This is now fixed.
================(Build #4220 - Engineering Case #310304)================
The documentation incorrectly states that when using the C++ API, storage
parameters can be passed to the UltraLite runtime either by using the UL_STORE_PARMS
macro or by passing them directly to the ULData::Open method. Actually,
using the ULData::Open method is the only way parameters can be passed.
================(Build #4276 - Engineering Case #323200)================
If two cursors were pointing to the same row and they both did a positioned
delete, the second delete would have undone the first, and the row would
have remained unchanged. If the second cursor did an update instead, the
original deleted row would have come back and the new version of the row
would have been added to the table as an entirely new row.
Now, if the second cursor does a delete or update on a row that another
cursor has just deleted, the operation will fail and SQLE_NOTFOUND will be
returned.
================(Build #4276 - Engineering Case #323203)================
Executing a DELETE WHERE CURRENT OF cursor statement could have caused a
future FETCH RELATIVE 0 to actually move the cursor ahead one row. This
has been fixed.
================(Build #4285 - Engineering Case #325039)================
Certain Unicode characters would have incorrectly compared as equal, possibly
resulting in a corrupt index, if they differed only in the high byte. For
example, an index containing a string column with these characters could
have failed to find rows previously inserted, resulting in invalid index
entries after deleting rows. This has been fixed.
================(Build #4286 - Engineering Case #325533)================
When SQLPP was supplied with a user-specified collation sequence whose name
differed from the ASA standard collation sequences, nothing would have been
generated when an UltraLite database was being generated, and the error
message:
Cannot generate UltraLite collation sequence for <name of collation>
would be displayed. This has been corrected.
================(Build #4299 - Engineering Case #327823)================
It was possible for an autoincrement (or global autoincrement) default on
a numeric column to overflow and cause other values in the row to be
corrupted. This has now been fixed.
================(Build #4234 - Engineering Case #314230)================
Under certain conditions, the SQL preprocessor (or ULGEN) would have created
incorrect UltraLite code for queries which used an index in which there were
columns ordered as DESCENDING. This resulted in no rows being retrieved
for the query. This has now been fixed.
================(Build #4286 - Engineering Case #325782)================
When SQLPP was supplied with a user-specified collation sequence file that
did not exist, the SQLPP utility would have crashed. This is now fixed.
================(Build #4112 - Engineering Case #306120)================
When creating a column in the UltraLite schema painter, if the default value
chosen was not compatible with the column datatype chosen, the schema painter
would have issued an uninformative error message. It now indicates that
the column type and the default do not match.
================(Build #4112 - Engineering Case #306122)================
When creating an index with the UltraLite Schema Painter, if an index name
was specified that already existed, the existing index was overwritten without
warning. The index creation dialog now prevents a user from creating an
index with the name of an existing index.
================(Build #4112 - Engineering Case #306204)================
The UltraLite Schema Painter would not have properly updated the icons for
columns in the primary key, if the operation was cancelled.
For example:
- Edit an existing table
- Alter the primary key by adding a column to it
- Cancel the editing of the table
Refreshing the application would make it appear as though the column that
was added to the primary key (but should have been discarded) was in fact
in the primary key. This has been fixed.
================(Build #4115 - Engineering Case #306245)================
When altering a primary key for a table in the UltraLite Schema Painter,
if the user pressed Cancel and then went to alter the key again, the columns
that had been added, but cancelled, would have appeared to be back in the
primary key. This has been fixed.
================(Build #4122 - Engineering Case #308327)================
UltraLite schemas created by the Schema Painter, or the ulxml utility, would
have had their case sensitivity reversed: for case-sensitive databases, case
was ignored when it should have been respected, and for case-insensitive
databases, case was respected when it should have been ignored. This has been
corrected.
================(Build #4209 - Engineering Case #307579)================
Attempting to drop a table in a newly created schema file, using the UltraLite
Schema Painter, could have failed with SQLCODE 0. Attempting to drop the
table again would have properly dropped it. This is now fixed, so that the
table is dropped on the first attempt.
================(Build #4212 - Engineering Case #308131)================
The UltraLite Schema Painter allows schema files to be read and stored as
XML files. Loading a schema in XML format, saving it, and then attempting
to load another XML file (without shutting down the Schema Painter) would
have caused an error message saying the file didn't exist or was invalid.
This is fixed.
A workaround would be to shut down the Schema Painter after saving an XML
file and start it up again to load another file.
================(Build #4214 - Engineering Case #308815)================
The UltraLite Schema Painter would have given a warning if an attempt was
made to create a table with a name that was already in use. Clicking OK
on the error message and providing a new name for the table would have caused
the Schema Painter to crash. This is now fixed.
================(Build #4219 - Engineering Case #310190)================
The collation info in schema files (.usm), required to sort Unicode characters,
was corrupt. The schema files generated by ulview, ulinit or ulxml, all showed
the same problem. The effect of this was that Unicode strings with non-ASCII
characters (i.e. Unicode values >= 128) may have had unexpected results:
values may have sorted incorrectly, and upper/lower case conversions may have
been incorrect. This has now been corrected.
================(Build #4242 - Engineering Case #315241)================
The Schema Painter was unable to open a user-defined XML file that contained
a publication with a large number of tables. When the user tried to open
such a schema file, the Schema Painter failed, returning an error indicating
that the publication could not be created because a table did not exist.
The table name displayed in the error sometimes contained a full table name,
but always contained various garbage characters. This is now fixed.
================(Build #4244 - Engineering Case #316472)================
When using the Schema Painter on Windows 9x, a conversion from .usm to .xml
format produced a corrupt XML file that would then not open in the Schema
Painter, and could not be converted back to a .usm file. This has now been
fixed.
================(Build #4283 - Engineering Case #324932)================
Dropping tables or adding foreign keys could have resulted in an invalid
schema. Also the Schema Painter was not properly detecting foreign key cycles,
which could have caused problems during synchronizations. MobiLink requires
that parent tables be synchronized before child tables and this condition
wasn't being guaranteed by the schema created in the Schema Painter. Both
of these problems have now been fixed.
In order to fix an invalid schema, use the (fixed) ulxml tool or the Schema
Painter to write the schema out as an XML file, then reload it. The process
of reloading will correct the table order.
================(Build #4224 - Engineering Case #311194)================
Opening a table with an index, after column objects for that table had been
referenced, may have thrown the error SQLE_METHOD_CANNOT_BE_CALLED. This
has now been fixed.
================(Build #4253 - Engineering Case #318855)================
If a ULConnection object's Close method was called before a ULColumn or
ULIndexSchema object was released, Visual Basic would have crashed. This
could have happened, for instance, if a ULColumn or ULIndexSchema object was
declared globally. This has been fixed.
================(Build #4253 - Engineering Case #318868)================
Attempting to call methods on a ULIndex object would have resulted in incorrect
errors. This would have happened if the index object was obtained before the
table was opened, and the table was then opened and closed.
For example:
Set t = Conn.GetTable("T")
Set idx = t.Schema.GetIndex("idx")
MsgBox "Name = " & CStr(idx.Name)   ' this is OK
t.Open
t.Close
MsgBox "Name = " & CStr(idx.Name)   ' this would fail
A similar problem existed with column objects (instead of index objects).
This has been fixed, so now the last line of the example will succeed.
================(Build #4299 - Engineering Case #327728)================
Attempting to delete all rows matching certain search criteria, assuming there
was an appropriate index, may have missed some rows or deleted the wrong
rows.
For example:
t.findBegin();
<specify search criteria using t.set*(...)>
t.findFirst();
t.delete();
while( t.findNext() ) {
    t.delete();
}
would have skipped some rows because delete() modified the search criteria.
If there were an odd number of rows matching the search criteria, this approach
would have deleted rows that did not match the search criteria.
Table.delete(), Table.truncate(), and Table.deleteAllRows() have now been
changed so as to cancel all edit and search modes.
================(Build #4085 - Engineering Case #301764)================
Coding a loop that searched for values using table.FindFirst .. table.FindNext
would have failed, because table.FindNext would have always returned False,
even when a match was found.
Workaround:
Instead of looping on table.FindNext, check for table EOF
Instead of:
While table.FindNext
    ' process match
Wend
Use:
Do
    table.FindNext
    If table.EOF Then Exit Do
    ' process match
Loop
================(Build #4090 - Engineering Case #302654)================
ActiveX for UltraLite for CE is now available for the Pocket PC 2002 Emulator
(Intel 386 architecture).
================(Build #4230 - Engineering Case #313070)================
The ULPublicationSchema.Mask property was always returning 0. This is now
fixed.
================(Build #4299 - Engineering Case #327708)================
Attempting to delete all rows matching certain search criteria, assuming there
was an appropriate index, may have missed some rows or deleted the wrong
rows.
For example:
t.findBegin();
<specify search criteria using t.set*(...)>
t.findFirst();
t.delete();
while( t.findNext() ) {
    t.delete();
}
would have skipped some rows because delete() modified the search criteria.
If there were an odd number of rows matching the search criteria, this approach
would have deleted rows that did not match the search criteria.
Table.delete(), Table.truncate(), and Table.deleteAllRows() have now been
changed so as to cancel all edit and search modes.