SQL Anywhere Bug Fix Readme for Version 10.0.1, build 4225
Choose a range of build numbers for which to display descriptions. For example,
if you want to see what was fixed since the last EBF you applied, change 3415
to the build number of that last EBF. Click Update Readme to make those changes
take effect.
A subset of the software with one or more bug fixes. The bug fixes are
listed below. A Bug Fix update may only be applied to installed software
with the same version number.
While some testing has been performed on the software, you should not distribute
these files with your application unless you have thoroughly tested your
application with the software.
A complete set of software that upgrades installed software from an older
version with the same major version number (version number format is
major.minor.patch). Bug fixes and other changes are listed in the "readme"
file for the upgrade.
For answers to commonly asked questions please use the following URL:
Frequently Asked Questions
If any of these bug fixes apply to your installation, iAnywhere strongly recommends
that you install this fix. Specific testing of behavior changes is recommended.
================(Build #4148 - Engineering Case #648497)================
When a consolidated database was running on an Oracle server, the MobiLink
server would not have advanced the next_last_download timestamp value (used
for generating the download in the next synchronization) after it had run
for a certain time, even if a synchronization contained a download request.
Once this occurred, the MobiLink server would have downloaded the same rows
over and over again. This has now been fixed.
A workaround is to restart the MobiLink server.
================(Build #3839 - Engineering Case #558108)================
Calling the function SQLGetTypeInfo() would have returned an incorrect AUTO_UNIQUE_VALUE
column value in the result set for non-numeric types. The value returned would
have been 0, but according to the ODBC specification a NULL should have been
returned.
The ODBC specification for AUTO_UNIQUE_VALUE states:
Whether the data type is autoincrementing:
SQL_TRUE if the data type is autoincrementing.
SQL_FALSE if the data type is not autoincrementing.
NULL is returned if the attribute is not applicable to the data type or
the data type is not numeric.
The behaviour has been corrected to follow the ODBC specification.
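The corrected mapping can be sketched as a small model (Python used purely for illustration; this is not the driver's code, and the function name is hypothetical):

```python
# Illustrative model of the AUTO_UNIQUE_VALUE column value mandated by the
# ODBC specification. None stands in for SQL NULL.
SQL_TRUE, SQL_FALSE = 1, 0

def auto_unique_value(is_numeric, is_autoincrementing):
    if not is_numeric:
        # NULL: the attribute does not apply to non-numeric types.
        # The bug was returning 0 (SQL_FALSE) here instead.
        return None
    return SQL_TRUE if is_autoincrementing else SQL_FALSE

print(auto_unique_value(is_numeric=False, is_autoincrementing=False))  # None
print(auto_unique_value(is_numeric=True, is_autoincrementing=True))    # 1
```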
================(Build #3798 - Engineering Case #550159)================
The ia32-libs package is required to install SQL Anywhere on 64-bit Ubuntu
distributions. It is also required to run any 32-bit software on 64-bit
Ubuntu distributions.
Furthermore, resolving host names from 32-bit applications will fail on 64-bit
Ubuntu installations, unless the package lib32nss-mdns is installed. See:
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=479144
================(Build #4205 - Engineering Case #663991)================
Under some circumstances, queries with cached plans and queries containing
complex correlated subqueries may have returned incomplete results if they
were executed as parallel plans. This has been fixed.
Note, a workaround is to disable intra-query parallelism (set MAX_QUERY_TASKS=1).
================(Build #3949 - Engineering Case #587028)================
The fix for Engineering Case 581991 could have inadvertently turned on the
new database file format even when it was not requested. This
has now been fixed. This could only have affected a database that was created
using 9.0.2 build 3867-3880 or 10.0.1 build 3930-3948. The query:
select * from sa_db_properties() where propname = 'HasTornWriteFix';
can be used to determine if the setting is ON or OFF. Turning the setting
OFF requires a rebuild of the database using an engine with this fix.
================(Build #3930 - Engineering Case #581991)================
In some cases, it was possible for a database to become corrupted if the last
write to disk was only partially completed, due to a power loss before the
write to disk finished. This would have been extremely rare on a desktop
machine or laptop using a conventional disk, but would have been much more
likely with flash memory on handheld devices. In either case, corruption
was more likely if the database file was encrypted. In order to prevent this
corruption it is necessary to change the physical format of the database
file. With this change in file format, the database should automatically
recover from a partial write.
WARNING: When choosing to use a database with the new file format described
below, you give up the ability to use older software that is
not aware of the file format change. Therefore, once you create this database,
you will not be able to start an old server with such a database file. Any
attempt to start the server with a build prior to this fix will generate
an error "server must be upgraded to start 'database' ( capability 37
missing)". Syntax has been added to dbinit and the create database statement
to allow for the creation of databases with this new file format. In order
to create a database with this new format, use either:
- dbinit -xw mydb.db // -xw (creates a database with the new file format)
- CREATE DATABASE mydb.db CAPABILITY 'TORNWRITEFIX' ...
Syntax has also been added to dbunload to rebuild databases so that the
new database will be created with the new file format. In order to rebuild
a database with the old format to one with the new format use:
- dbunload.exe -c 'your connection string to your old db' -xw -ar // this
will replace your old database with a new one
- dbunload.exe -c 'your connection string to your old db' -xw -an new.db
// this will create new.db in the new file format while maintaining your
old database in the old file format
NOTE: make sure to have a backup of your database file before attempting
this. Also if your database is strongly encrypted then you'll need to provide
the encryption key for both the old database and the new database file using
the -ek command line parameter.
In order to upgrade a database with this new file format from version 9
to version 10 or version 11 you'll need to use 10.0.1 3930 or greater or
11.0.1 2278 or greater.
================(Build #3594 - Engineering Case #478618)================
A server running a database that had handled more than roughly a billion
transactions or snapshot scans in total since the database was created, could
have failed assertion 201502 "inconsistent page modification value".
A single query may start many snapshot scans, so the problem was much more
likely if using snapshot isolation. The problem is now fixed.
After the assertion, there was a very small chance of corruption that may
be present even if the server reports a successful recovery. Therefore,
validation is recommended. If problems are detected, then they should be
handled with normal procedures (e.g. restore to backup, or attempt to salvage
using an unload and reload with a server that has the bug fix).
It is possible to check the value of the internal counter involved in the
bug by starting a transaction and running the query "select start_sequence_number
from sa_transactions()". It is best to upgrade to a fixed server regardless
of the value, but a larger value indicates a workload that is more likely
to eventually trigger the counter overflow that causes the problem.
================(Build #3490 - Engineering Case #468023)================
When performing a 10.0.1 upgrade of a 10.0.0 install on Windows with the
FIPS option selected, the dbfips10.dll was not updated. This has been fixed;
EBFs with this change will now update the dbfips10.dll.
================(Build #3474 - Engineering Case #463608)================
When generating sortkeys for the following collations, the generated sortkeys
were default UCA keys, rather than the keys appropriate to the language or
region:
47 scandict
48 scannocp
58 rusdict
59 rusnocs
63 cyrdict
64 cyrnocs
65 elldict
69 hundict
71 hunnocs
70 hunnoac
72 turdict
74 turnocs
73 turnoac
1 thaidict
This has been corrected so that appropriate keys are now generated. It is
highly recommended that any columns which store SORTKEY values using these
collations be recomputed, as comparisons and sort order will be affected.
================(Build #3470 - Engineering Case #454858)================
The server could have crashed when performing a LOAD TABLE into a temporary
table. Certain requirements are now properly enforced when loading into temporary
tables. A commit is now performed prior to the load, and after a successful
load, for any table except a LOCAL TEMPORARY TABLE with ON COMMIT DELETE
ROWS. A partial LOAD TABLE into a temporary table due to a failure will
result in the entire contents of the table being removed, including rows
in the temporary table which were present prior to the load. Previously,
a partial load would have resulted in the rows loaded to that point being
left in the table, while others were missing. Since rows are removed from
a temporary table on error, it is now also required that there be no foreign
rows already referencing rows in the table being loaded. As a result, loading
into a temporary table already containing rows with foreign key references
from another table will result in an error. In version 10.0, the use of LOAD
TABLE into a global temporary table, that was created with the SHARE BY ALL
clause, will cause an exclusive lock on the table. This will prevent concurrent
loads into the table.
================(Build #3664 - Engineering Case #496567)================
The MobiLink system table ml_qa_status_history in a consolidated database
would have grown without limit. The problem only occurs for ASE consolidated
databases. To correct this in existing ASE consolidated databases, the trigger
ml_qa_repository_trigger needs to be changed by running the following script
on the consolidated database after it has been initialized with the MobiLink
setup script:
delete from ml_qa_status_history where msgid not in (select msgid from ml_qa_repository)
go
commit
go
drop trigger ml_qa_repository_trigger
go
create trigger ml_qa_repository_trigger on ml_qa_repository for delete
as
delete from ml_qa_repository_props from deleted d, ml_qa_repository_props
p
where d.msgid = p.msgid
delete from ml_qa_delivery from deleted d, ml_qa_delivery p
where d.msgid = p.msgid
delete from ml_qa_status_history from deleted d, ml_qa_status_history p
where d.msgid = p.msgid
go
================(Build #3893 - Engineering Case #571233)================
When the -pi (ping MobiLink server) option was used on the MobiLink client
(dbmlsync) command line, dbmlsync would have returned an exit code of 0 (indicating
success), even if it was unable to contact the MobiLink server. This has
been fixed, and a non-zero exit code will now be returned in this case.
================(Build #3685 - Engineering Case #495705)================
When the MobiLink Client was launched using the Dbmlsync Integration Component,
it was possible to view the username and password in the dbmlsync command
line in plain text. This has been corrected so that the Dbmlsync Integration
Component will encrypt all the command line options and settings before passing
them to dbmlsync. Dbmlsync will then decrypt the options and settings.
Note, this fix does not apply to the Dbmlsync Integration Component running
on Windows CE systems.
================(Build #3560 - Engineering Case #480085)================
If transaction log files for multiple databases were stored in a single directory,
the MobiLink client dbmlsync might not have been able to synchronize any
of these databases, even if there were no offline transaction log files for
any of these databases. To resolve this, a new command line option has been
added to dbmlsync:
-do disable offline logscan (cannot use with -x)
When this new option is used, dbmlsync will not attempt to scan any offline
transaction logs. Therefore, dbmlsync with -do should be able to synchronize
a database that is stored with all the other databases in a single directory,
as long as this database does not have any offline transaction log files.
However, if actual offline transaction log files are requested (for instance,
if the minimum progress offset is located
in an offline transaction log or if an uncommitted transaction starts from
an offline transaction log), dbmlsync with -do will raise an error and refuse
to run the synchronization.
================(Build #3505 - Engineering Case #471582)================
MobiLink clients will now include their version and build number with each
synchronization, and the MobiLink server will display this information in
its log. As a result, the MobiLink log will now contain lines like the following:
Request from "Dbmlsync Version 11.0.0.1036 Debug Internal Beta"
for: remote ID: e08b39d1-e3fa-4157-969b-8d8679324c00, user name: template_U1,
version: template_test
instead of lines like this:
Request from "MLSync" for: remote ID: 45ef79ab-195e-4f76-805b-95eef2773e8f,
user name: template_U1, version: template_test
================(Build #3474 - Engineering Case #464303)================
When determining where to begin scanning the transaction log, dbmlsync will
now ignore subscriptions if they do not contain any of the tables that are
currently being synchronized. Previously when building an upload, dbmlsync
scanned the log from the lowest progress value of any subscription involving
the MobiLink user who is synchronizing.
To take advantage of this optimization, you should now define your publications
as disjoint (not sharing any tables) whenever possible. This will result
in a major performance improvement when one subscription is being synchronized
more frequently than another.
For example, suppose publications P1 and P2 share no tables. P2 is synchronized
daily and P1 is synchronized hourly. Each time P1 is synchronized, its progress
is advanced by 1000. The following table shows the segment of log scanned
for each synchronization based on the old and new behavior:
Action    P1's progress  P2's progress  log scanned     log scanned
                                        (old behavior)  (new behavior)
sync P1   1000           1000           1000-2000       1000-2000
sync P1   2000           1000           1000-3000       2000-3000
sync P1   3000           1000           1000-4000       3000-4000
sync P1   4000           1000           1000-5000       4000-5000
You should be able to define your publications as disjoint except where
two publications contain the same table but with different WHERE clauses.
Defining disjoint publications should never limit functionality because the
dbmlsync -n option can accept a comma separated list of publications which
causes the union of the publications to be synchronized.
For example, suppose table T1 is to be synchronized throughout the day,
and tables T1 and T2 are to be synchronized at the end of the day. Previously,
you might have defined your publications as follows:
- P1 contains T1
- P2 contains T1 and T2.
- Synchronize during the day with the dbmlsync option -n P1
- Synchronize at the end of the day with the dbmlsync option -n P2
In order to take advantage of the new optimization, two publications should
be defined: P1 contains T1, and P2 contains T2. During the day, dbmlsync
will synchronize using the -n P1 option. At the end of the day, dbmlsync
will synchronize using -n P1,P2. This does the same thing, but is much more
efficient with the new log scanning behavior.
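The log-scanning change described above can be sketched as a simple model (Python for illustration only; the function and data shapes are hypothetical, not dbmlsync internals):

```python
# Illustrative model of where dbmlsync starts scanning the transaction log.
# Each subscription is a (progress_offset, set_of_tables) pair.
def scan_start(subscriptions, syncing_tables, optimized):
    if optimized:
        # New behavior: ignore subscriptions that share no tables
        # with the publications currently being synchronized.
        offsets = [p for p, tables in subscriptions if tables & syncing_tables]
    else:
        # Old behavior: lowest progress of any subscription for the user.
        offsets = [p for p, _ in subscriptions]
    return min(offsets)

subs = [(4000, {"T1"}),   # P1, synchronized hourly
        (1000, {"T2"})]   # P2, synchronized daily
print(scan_start(subs, {"T1"}, optimized=False))  # 1000 (rescans from P2's offset)
print(scan_start(subs, {"T1"}, optimized=True))   # 4000 (P1's own offset only)
```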
================(Build #4111 - Engineering Case #626480)================
An extension to the TLS protocol for session renegotiation has been made
to fix a recently discovered vulnerability (RFC 5746). Although SQL Anywhere
software is not directly vulnerable, third-party servers that it communicates
through may be vulnerable. SA clients now support this TLS extension which
will allow vulnerable third-party servers to be secured.
================(Build #3494 - Engineering Case #469151)================
The MobiLink server now closes non-persistent HTTP connections without waiting
for the client to close the connection, as there is no need to hang on to
the connection after the HTTP response is sent. This change modestly improves
server resource usage when using non-persistent HTTP.
================(Build #4047 - Engineering Case #623106)================
New http, https and oe stream options have been added to the MobiLink server
that will cause it to print additional errors, analogous to the errors printed
by the -vf option.
Usage:
-x http(...;log_bad_request={yes|no})
-x https(...;log_bad_request={yes|no})
-x oe(...;log_bad_request={yes|no})
The default value for these new options is "no".
If log_bad_request is enabled and a request disconnects before the server
receives a complete set of HTTP headers, the server will print these errors:
[-10117] Stream Error: Failed reading an incomplete HTTP request
[-10117] Stream Error: This connection will be abandoned because of previous
errors
If log_bad_request is enabled and a request contains an unknown User-Agent
or unknown request type, the server will print these errors:
[-10117] Stream Error: Unknown HTTP User-Agent or request type
[-10117] Stream Error: This connection will be abandoned because of previous
errors
This option is most useful when debugging network issues. For example,
you can connect to the MobiLink server using a web browser on the remote device;
if the device can reach the server, these errors will be printed.
================(Build #3838 - Engineering Case #558085)================
The MobiLink server now supports two new command line options:
-vR show the remote ID in each logging message
-vU show the ML user name in each logging message
When both -vR and -vU are specified, the MobiLink server will add the remote
ID and the MobiLink user to each message logged:
yyyy-mm-dd hh:mm:ss. <sync_id> ({remote_id},{user_name})
When started with -vR and without -vU, the prefix will be just the remote_id:
yyyy-mm-dd hh:mm:ss. <sync_id> (remote_id,)
and the MobiLink user name will be empty. When started with -vU and without
-vR, the
prefix will be just the user name:
yyyy-mm-dd hh:mm:ss. <sync_id> (,user_name)
and the remote ID will be empty.
This new feature may be useful for MobiLink users who are running the MobiLink
server with the command options -on or -os, as the logging messages for a
synchronization can span multiple MobiLink server log files, which makes
it hard to determine the remote ID and MobiLink user name for a given sync
ID from such logs. This extra logging information only applies to the
synchronization threads. For the main thread of the MobiLink server, the
logging messages will still contain the following prefix, because there is
no remote ID or MobiLink user name for the main thread:
yyyy-mm-dd hh:mm:ss. <Main>
These two command line options are not affected by the -v+ option; that
is, the MobiLink server will not add the remote ID or the ML user name to
its logging messages, even if the -v+ option is used. Therefore, the description
for the -v+ option has been changed to "show all verbose logging specified
with lower case letters".
================(Build #3595 - Engineering Case #486422)================
The MobiLink server for Windows x64 now supports both RSA and ECC stream
encryption. Note that the MobiLink server does not support FIPS-compliant
RSA on Windows x64.
================(Build #3508 - Engineering Case #472237)================
The MobiLink server is now supported on Windows x64. The supported consolidated
databases are SQL Anywhere and Oracle (see Engineering case 472238 for details).
An x64 JRE install is required, which is available at:
http://downloads.sybase.com/swd/detail.do?baseprod=144&client=ianywhere&relid=10260
To install this JRE:
Download and run SA_1001_JRE_UPD.exe. The install will locate an existing
SQL Anywhere 10 installation, and install a Java Runtime Environment for
x64.
To install the x64 MobiLink server:
1) Install the EBF normally on an x64 computer, then go to Add/Remove
Programs (Programs and Features on Vista), select SQL Anywhere 10 and click
the Change button. Check that Modify is selected and click the Next button.
2) The Select Features dialog will appear and it will show the MobiLink
for x64 features under "MobiLink". 32 bit MobiLink features will
be displayed under "MobiLink (32 bit)".
Check the checkboxes for MobiLink to install MobiLink for x64 and click
on the Next button.
3) Click the Next button on the next dialog to implement the modifications.
4) Click the Finish button on the last dialog to complete the installation.
================(Build #3492 - Engineering Case #468862)================
Previously, the MobiLink server only supported "blocking download ack"
mode. Now, the MobiLink server also supports "nonblocking download ack"
mode. This mode may be controlled with the "-nba+" (use nonblocking
download ack) or "-nba-" (use blocking download ack) command line
options, with "-nba-" being the default. Nonblocking download acknowledgement
mode provides a significant performance advantage over blocking download
acknowledgement mode, but may not be compatible with some existing scripts.
The following is the documentation update:
MobiLink now supports two modes of download acknowledgement: blocking and
non-blocking. Prior to this change, MobiLink only supported blocking download
acknowledgement. When you turn on download acknowledgement, the default continues
to be blocking.
Note: Download acknowledgement is not on by default. To turn it on, use
the dbmlsync "SendDownloadACK" extended option or the UltraLite
"Send Download Acknowledgment" synchronization parameter. Download
acknowledgement is not required to ensure that data is successfully downloaded.
Download acknowledgement simply allows you to get acknowledgement of a successful
download immediately (in the download); otherwise the acknowledgement occurs
in the next upload.
To set non-blocking download acknowledgement, use the new mlsrv10 option
-nba+.
Non-blocking download acknowledgement is recommended because it provides
a significant performance advantage over blocking download acknowledgement.
However, non-blocking download acknowledgement cannot be used in the following
cases:
- Clients prior to 10.0.0 do not support non-blocking acknowledgement.
- You cannot use non-blocking acknowledgement with the Notifier scripts
that are generated by MobiLink Model mode.
QAnywhere messaging uses non-blocking download acknowledgement. You cannot
change this setting for QAnywhere: you will get an error if you specify both
-m and -nba- on the mlsrv10 command line.
To provide extra optional functionality to the new non-blocking download
acknowledgement, two new MobiLink events have been added: nonblocking_download_ack
and publication_nonblocking_download_ack.
nonblocking_download_ack connection event
-----------------------------------------
Function: When using non-blocking download acknowledgement, this event lets
you update your database based on the successful application of the download.
Parameter name   Description    Order
--------------   -----------    -----
s.remote_id      VARCHAR(128)   N/A
s.username       VARCHAR(128)   1
s.last_download  TIMESTAMP      2
This event is only called when using non-blocking download acknowledgement.
When in non-blocking mode, the download transaction is committed and the
synchronization ends when the download is sent. This event is called when
the synchronization client acknowledges a successful download. This event
is called on a new connection, after the end_synchronization script of the
original synchronization. The actions of this event are committed along with
an update to the download time in the MobiLink system tables.
Due to the special nature of this script, any connection-level variables
set during the synchronization are not available when this event is executed.
For example, the following script adds a record to the table download_pubs_acked.
The record contains the remote ID, first authentication parameter, and the
download timestamp.
INSERT INTO download_pubs_acked( rem_id, auth_parm, last_download )
VALUES( {ml s.remote_id}, {ml a.1}, {ml s.last_publication_download}
)
publication_nonblocking_download_ack connection event
-----------------------------------------------------
Function: When using non-blocking download acknowledgement, this event lets
you update your database based on the successful application of the download
of this publication.
Parameter name               Description    Order
--------------               -----------    -----
s.remote_id                  VARCHAR(128)   N/A
s.username                   VARCHAR(128)   1
s.last_publication_download  TIMESTAMP      2
s.publication_name           VARCHAR(128)   3
s.subscription_id            VARCHAR(128)   4
This event is only called when using non-blocking download acknowledgement.
When in non-blocking mode, the download transaction is committed and the
synchronization ends when the download is sent. When the synchronization
client acknowledges a successful download, this event is called once per
publication in the download. This event is called on a new connection and
after the end_synchronization script of the original synchronization. The
actions of this event are committed along with an update to the download
time in the MobiLink system tables.
Due to the special nature of this script, any connection-level variables
set during the synchronization are not available when this event is executed.
For example, the following script adds a record to a table called download_pubs_acked.
The record contains the publication name, the first authentication parameter,
and a download timestamp:
INSERT INTO download_pubs_acked( pub_name, auth_parm, last_download )
VALUES( {ml s.publication_name}, {ml a.1}, {ml s.last_publication_download}
)
================(Build #3474 - Engineering Case #464488)================
The MobiLink server now buffers pending writes more efficiently during HTTP
synchronizations. This change allows the server to use significantly less
memory, which may make for less swapping to disk.
================(Build #3746 - Engineering Case #539813)================
A new command line option (-sv) has been added to the MobiLink Listener to
allow for specifying the script version used for authentication. The default
value is ml_global.
================(Build #3509 - Engineering Case #472228)================
The MobiLink Redirector now supports "IIS6 in IIS5 isolation mode".
IIS5 allows processing of multiple persistent HTTP request-responses within
a single entry into the web server extension. However, IIS6 in IIS5 isolation
mode disallows that, possibly due to the stricter HTTP system driver. To work
around this, the Redirector will exit the extension after a single request-response
cycle, while persisting the backend connection for later use. This change
should increase concurrency, especially when the backend servers are the
bottleneck.
To turn on IIS5 isolation mode on an IIS6 server using the IIS Manager, right
click on IIS Manager->Web Sites and select Properties. Bring up the Service
page and select "IIS 5.0 isolation mode".
================(Build #3592 - Engineering Case #483452)================
The IAS Oracle driver requires Oracle's OCI client libraries. If these libraries
are not installed properly, the IAS driver would have silently failed to
load. The driver will now load, but will fail when first used by an application.
The application can query the failure and will receive an error which indicates
that the OCI library is missing.
================(Build #4072 - Engineering Case #629284)================
The Visual Studio "Add Connection" wizard will display SQL Anywhere
and Adaptive Server Anywhere ODBC Data Source names in the pick list when
the SQL Anywhere .NET provider is used for the connection. SQL Anywhere integration
with Visual Studio has been improved to also show Sybase IQ ODBC Data Source
names in the pick list.
================(Build #3602 - Engineering Case #482851)================
The provider now supports using named parameters. If all the parameter names
are specified, the provider will map them to the parameter values when the
command is executed. The order of parameters does not have to be the same
as the order of host variables when using named parameters.
For example, using named parameters when calling a procedure:
SACommand cmd = new SACommand( "MyProc", conn );
cmd.CommandType = CommandType.StoredProcedure;
SAParameter p4 = new SAParameter( "p4", SADbType.Integer );
p4.Direction = ParameterDirection.Output;
cmd.Parameters.Add( p4 );
SAParameter p3 = new SAParameter( "p3", SADbType.NChar, 30 );
p3.Direction = ParameterDirection.Output;
cmd.Parameters.Add( p3 );
SAParameter p2 = new SAParameter( "p2", SADbType.Char, 20 );
p2.Direction = ParameterDirection.InputOutput;
p2.Value = "222";
cmd.Parameters.Add( p2 );
SAParameter p1 = new SAParameter( "p1", SADbType.Integer );
p1.Direction = ParameterDirection.Input;
cmd.Parameters.Add( p1 );
cmd.ExecuteNonQuery();
given the following procedure definition:
CREATE PROCEDURE MyProc( in p1 int, inout p2 char(20), out p3 nchar(30), out p4 int )
BEGIN
SET p2 = p2 + 'abc';
SET p3 = '333xyz';
SET p4 = p1 * 4;
END
Using named parameters in a query:
SACommand cmd = new SACommand( "UPDATE MyTable SET name = :name WHERE id = :id", conn );
SAParameter p1 = new SAParameter( "id", SADbType.Integer );
p1.Direction = ParameterDirection.Input;
p1.Value = 1;
cmd.Parameters.Add( p1 );
SAParameter p2 = new SAParameter( "name", SADbType.Char, 40 );
p2.Direction = ParameterDirection.Input;
p2.Value = "asdasd";
cmd.Parameters.Add( p2 );
cmd.ExecuteNonQuery();
================(Build #3509 - Engineering Case #472489)================
The .NET 2.0 framework introduced a new namespace, System.Transactions,
which contains classes for writing transactional applications. Client applications
can create and participate in distributed transactions with one or multiple
participants. These applications can implicitly create transactions using the
TransactionScope class. The connection object can detect the existence of
an ambient transaction created by the TransactionScope and automatically
enlist. Applications can also create a CommittableTransaction and call
the EnlistTransaction method to enlist.
Distributed transactions have significant performance overhead. It is recommended
to use database transactions for non-distributed transactions.
================(Build #3818 - Engineering Case #553310)================
An application that uses the iAnywhere JDBC driver must now have the jodbc.jar
built with the same build number as the dbjodbc and mljodbc shared objects.
If the jar and shared objects are out of sync, a SQLException will be thrown
at connect time and the connection will be refused.
================(Build #3797 - Engineering Case #549932)================
The iAnywhere JDBC driver has supported the PreparedStatement.addBatch()
and PreparedStatement.executeBatch() methods for quite some time now, but
these methods were only supported for INSERT statements. These methods will
now also be supported for UPDATE and DELETE statements, provided the underlying
connection is to an SA server. If the underlying connection is to a non-SA
server, then these methods will still only be supported for INSERT.
================(Build #3721 - Engineering Case #536335)================
If an application generated a result set via one DatabaseMetaData call, and
then generated a second result set via another DatabaseMetaData call, then
the first result set would have been automatically closed. This behaviour
is not incorrect, and is consistent with many other JDBC drivers. However,
some applications have had the need to keep two separate DatabaseMetaData
result sets open at the same time. The iAnywhere JDBC driver has now been
enhanced to allow up to three separate DatabaseMetaData result sets to remain
open at the same time.
================(Build #3721 - Engineering Case #534307)================
If an application using the iAnywhere JDBC driver attempted to use the optional
DatabaseMetaData.getUDTs() method, the driver would have thrown a "not
yet implemented" exception. The iAnywhere JDBC driver has now been enhanced
to return a proper result set for the getUDTs() method if the driver is connected
to an SA database. For all non-SA servers, the iAnywhere JDBC driver will
continue to throw the "not yet implemented" exception.
================(Build #3508 - Engineering Case #472478)================
The iAnywhere JDBC driver has now been enhanced to support the DB2 Mainframe
ODBC driver. The iAnywhere JDBC driver now checks the DBMS name reported
by the DB2 driver; if it is the DB2 Mainframe driver, it sets the appropriate
default result set type and other attributes.
================(Build #3493 - Engineering Case #468630)================
The iAnywhere JDBC Driver currently supports the ResultSet.getBlob() method
even though this method is optional in the JDBC specification. However, the
ResultSet.getBlob().getBinaryStream() method (which is also optional in the
specification) was not supported. Some applications insist that Blob.getBinaryStream()
be supported if getBlob() is supported. As a result, the iAnywhere JDBC Driver
now supports ResultSet.getBlob().getBinaryStream().
================(Build #3972 - Engineering Case #594888)================
The performance of ODBC metadata functions, such as SQLPrimaryKeys, SQLTables,
and SQLColumns, has been improved for case-sensitive databases. This performance
improvement will not occur for case-sensitive databases if SQLSetStmtAttr
is called to set the SQL_ATTR_METADATA_ID attribute to SQL_TRUE. However,
by default, this attribute is set to SQL_FALSE. When set to SQL_FALSE, case-sensitive
databases will now enjoy the same performance as case-insensitive databases.
Please note that when the SQL_ATTR_METADATA_ID attribute is set to SQL_TRUE,
the string arguments to metadata functions are treated as identifiers, not
strings (or patterns like "co%"). Identifiers can be delimited,
in which case leading and trailing spaces are removed. Identifiers need not
be delimited, in which case trailing spaces are removed.
================(Build #4139 - Engineering Case #645959)================
Microsoft SQL Server has introduced two data types DBTYPE_DBTIME2 and DBTYPE_DBTIMESTAMPOFFSET
that are not part of the OLE DB specification. Support for conversions between
these two types and DBTYPE_STR, DBTYPE_WSTR, DBTYPE_DBDATE, DBTYPE_DBTIME,
and DBTYPE_DBTIMESTAMP has now been added to the SQL Anywhere OLE DB provider.
DBTYPE_DBTIME2 differs from DBTYPE_DBTIME in that fractional seconds are
included. The type corresponds to the Microsoft SQL Server TIME data type.
DBTYPE_DBTIMESTAMPOFFSET adds support for a timezone offset (hours/minutes).
The type corresponds to the Microsoft SQL Server DATETIMEOFFSET data type.
================(Build #3771 - Engineering Case #543888)================
The SQL Anywhere OLE DB provider did not support multiple parameter sets
in the ICommand::Execute method. The number of parameter sets is specified
in the cParamSets field of the DBPARAMS structure, for example:
HRESULT Execute(
    IUnknown    *pUnkOuter,
    REFIID      riid,
    DBPARAMS    *pParams,
    DBROWCOUNT  *pcRowsAffected,
    IUnknown    **ppRowset );

struct DBPARAMS {
    void        *pData;
    DB_UPARAMS  cParamSets;
    HACCESSOR   hAccessor;
};
This is now supported, so it is now possible to execute one INSERT statement
and specify several sets of parameters in order to insert several rows into
a table.
Note the following OLE DB specification restriction:
Sets of multiple parameters (cParamSets is greater than one) can be specified
only if DBPROP_MULTIPLEPARAMSETS is VARIANT_TRUE and the command does not
return any rowsets.
This means that multiple parameterized SELECT statements can not be executed
that would each return a result set. For the SQL Anywhere provider, DBPROP_MULTIPLEPARAMSETS
is VARIANT_TRUE (and always has been).
================(Build #3767 - Engineering Case #545096)================
The latest releases of the Perl, PHP, etc. language drivers now use the C API
that was released with SQL Anywhere 11.0. To allow those new drivers to be
backward compatible with 10.0.1, support has been added for this new C API
to 10.0.1. Note that the language drivers that ship with 10.0.1 EBFs will
continue to be the ESQL version.
================(Build #3624 - Engineering Case #489694)================
Support has now been added for Windows Mobile 5 SmartPhone Edition and Windows
Mobile 6 Standard edition. All SQL Anywhere Windows CE functionality is supported
on the SmartPhone, except for the following:
- the SharedMemory communication protocol is not supported. The TCP/IP
communication protocol is used even if no protocol is specified, and a server
name must always be specified when making a connection, or the connection
will fail.
- the preferences dialog on the database server and MobiLink client is not
supported. The preferences dialog normally appears if no command line options
are used.
- the ODBC and OLEDB connection prompt dialog is not supported. This dialog
normally may appear depending on the DriverCompletion parameter to SQLDriverConnect,
or the DBPROP_INIT_PROMPT OLEDB property.
- the Unload / Reload support is not available. This includes dbunload.exe,
dbrunsql.exe and the unload support server.
================(Build #3470 - Engineering Case #461688)================
The Deployment Wizard was not including the utility for unloading database
files created with versions earlier than 10.0.0 (dbunlspt.exe and associated
files). These files will now be included by selecting the new "Unload
Support for pre 10.0 databases" feature which has been added under "SQL
Anywhere Server\Database Tools" in the Deployment Wizard.
================(Build #4131 - Engineering Case #643701)================
HP uses the character set name hp15CN as an alias for GB2312. Support for
this alias has been added. Current builds of 11.0.1 and up already handled
this alias.
================(Build #4048 - Engineering Case #623610)================
Processor topology detection for x86/x64 processors has been improved to
detect new SMT processors correctly (i.e. processors with multiple threads
per core that are not the older "hyperthread" implementation).
Previously, a quad-core i7 (for example) with two threads per core would
be detected as 8 cores rather than 4 cores with 2 threads per core. The algorithm
for distributing database server threads among logical processors when using
less than the maximum concurrency permitted (via the database server -gtc
switch) now correctly takes the 3-level chip/core/thread topology into consideration.
Generally, this change does not affect licensing since association of a logical
processor with the actual chip containing each logical processor was still
correct in the old code with the possible exception of some newer Intel 6-core
processors.
On Mac OS X where the operating system does not provide interfaces to control
processor affinity, exact processor topology cannot be determined, so SQL
Anywhere treats each logical processor as a separate package or "socket".
On multicore and SMT processors, Mac OS X users should purchase the correct
license for the hardware they are using, but install a license that allows the
correct amount of concurrency. For example, on a quad-core i7 with two threads
per core, purchase a license for 1 CPU "socket" but install a license
for 8 CPU "sockets", since each processor thread is treated as a
separate CPU socket.
Feature tracking code has also been changed so that the 3-level topology,
CPU brand string and CPU info registers (which do not form a unique machine
identifier) are reported when crash reports are sent to Sybase.
================(Build #4047 - Engineering Case #622024)================
If an application attempted to perform an integrated login from one Windows
machine to a SQL Anywhere database server running on a different Windows
machine; and the machine that the database server was running on was not
the domain controller; and the Windows userid that the application was using
was not explicitly mapped in the database; and the application was expecting
that the server would instead map the application's Windows userid to a Windows
user group on the Domain Controller, then there was a chance the integrated
login would fail to map the Windows group.
For example:
1) suppose the domain controller was Windows machine DC, and
2) suppose the application was running on Windows machine App with Windows
userid AppUser, and
3) suppose the database server was running on Windows machine SAServ with
Windows userid ServUser, and
4) suppose the domain controller had a Windows user group GRP of which AppUser
was a member, and
5) suppose the database did not grant explicit integrated login privileges
to AppUser but had instead granted integrated login privileges to GRP,
then there was a chance that the application would fail to establish an
integrated login to the db userid that GRP was mapped to. This problem has
now been fixed.
================(Build #3984 - Engineering Case #605668)================
If an application that was connected via jConnect or Open Client attempted
to insert or retrieve a datetime or time value, then the date portion of
the value was limited to January 1, 1753 or later, and the time portion was
restricted to a precision of 1/300th of a second. Now, if an application uses
newer versions of jConnect and Open Client, then the date portion of datetime
values will span the full range from 0001-01-01 to 9999-12-31, and the time
portion will now be handled in microsecond precision.
================(Build #3873 - Engineering Case #566651)================
A SERVICE may invoke a procedure that explicitly sets the 'Connection' and
'Content-Length' HTTP response headers using the SA_SET_HTTP_HEADER system
procedure. The setting of the 'Content-Length' was ignored, and the setting
of 'Connection: close' implicitly disabled chunked-mode transfer encoding.
Changes have been made to provide greater control over SQL Anywhere HTTP server
responses. The following is a summary of the new behaviour:
Client is HTTP/1.0:
The server does not support Keep-Alive and Chunk-mode operation for this
HTTP version.
By default the server never sets the 'Transfer-Encoding' header and always
sets 'Connection: close' header, shutting down the connection once the response
has been sent. A SERVICE procedure may set the 'Content-Length' header but
setting the 'Connection' header is ignored.
Client is HTTP/1.1:
By default the server responses use chunked-mode transfer encoding and automatically
set the 'Transfer-Encoding: chunked' header. If the SERVICE procedure explicitly
sets a 'Content-Length' header to some value, then the 'Content-Length' header
is sent in place of the 'Transfer-Encoding' header and the response body
is not chunk-mode encoded. Note: It is an error for a SERVICE procedure
to set both a 'Content-Length' and 'Transfer-Encoding' header.
The server will assume 'Connection: keep-alive' if the client does not send
a 'Connection' request header. If a client explicitly sends a 'Connection:
close' request header and/or the SERVER procedure explicitly sets 'Connection:
close' the server will shutdown the connection once the response has been
sent.
Setting Content-Length
In most cases data will need to be buffered in order to calculate the Content-Length.
Therefore, it is recommended to use chunk-mode transfer encoding whenever
possible. If 'Content-Length' must be set, then care must be taken to ensure
that the result-set is not character set translated when the response is
composed. It is recommended that the 'CharsetConversion' HTTP option be
set to OFF when returning textual data. Also, setting 'Content-Length' should
only be done within a TYPE 'RAW' SERVICE, since some service types (e.g. 'HTML',
'JSON') add content to the response.
A check has been added to ensure that the actual length of the response
matches the value of 'Content-Length' header. If the values do not match
then the server will shutdown the connection, once the response has been
sent, regardless of whether or not a 'Connection: keep-alive' response header
has been sent.
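As a sketch, a TYPE 'RAW' service that sets 'Content-Length' itself might look
like the following. The service and procedure names, and the trivial constant
body, are illustrative only:

```sql
-- Hypothetical RAW service; names and body are illustrative.
CREATE SERVICE raw_svc TYPE 'RAW' AUTHORIZATION OFF USER DBA
AS CALL raw_svc_proc();

CREATE PROCEDURE raw_svc_proc()
BEGIN
    DECLARE body LONG VARCHAR;
    SET body = 'hello';
    -- Turn off character-set translation so the declared length
    -- matches the bytes actually sent.
    CALL sa_set_http_option( 'CharsetConversion', 'OFF' );
    -- The server now verifies this value against the actual response length.
    CALL sa_set_http_header( 'Content-Length',
                             CAST( LENGTH( body ) AS VARCHAR ) );
    SELECT body;
END;
```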
================(Build #3837 - Engineering Case #557527)================
A SQL Anywhere HTTP client function was only able to return a varchar data
type. Support has now been added to the HTTP client so that it can also be
defined to return binary, varbinary or long binary data types, i.e.:
CREATE function client() RETURNS long binary URL 'http://localhost/image_service/...'
TYPE 'HTTP:GET'
Note that this change only extends the semantic meaning of the returned value;
declaring the return data type as binary does not change the behaviour at
the transport level. Textual data may still be converted to the database
character set based on Content-Type HTTP header or SOAP envelope encoding
criteria.
================(Build #3836 - Engineering Case #557363)================
By default (or with SET 'HTTP(CH=auto)') an SA HTTP client procedure would
have sent its HTTP request using chunk mode transfer encoding when posting
data that was greater than 2048 bytes. If the server rejected the request
with a 501 "Not Implemented" or 505 "HTTP Version Not Supported"
status, the procedure would have automatically re-issued the request without
using chunk transfer encoding. When in default mode, an SA client would
not have used chunk transfer encoding when posting data that was less than
2048 bytes in length. This has now been changed so that the data byte limit
is now 8196 bytes, from 2048 bytes, and the status 411 "Length Required"
has been added to its criteria for re-issuing the request without using chunk
mode transfer encoding.
================(Build #3834 - Engineering Case #555976)================
With the release of SA 10.0.0, identifiers were restricted such that they
could no longer include double quotes or backslashes. Unfortunately, if an
application wants to create an externlogin to a remote SQL server using secure
logins, then the remote login needs to be specified in the form user\domain.
As a result, the remote login specification of a create externlogin statement
has now been extended to accept both identifiers and strings. Note that no
catalog changes have been made; hence, the remote login specification is
still restricted to 128 bytes.
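For example, a hypothetical externlogin using a string for the remote login
(the server name, login, and password below are illustrative):

```sql
-- A 'user\domain' remote login could not previously be written
-- as an identifier, since backslashes were disallowed.
CREATE EXTERNLOGIN DBA TO mssql_server
REMOTE LOGIN 'myuser\mydomain' IDENTIFIED BY 'secret';
```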
================(Build #3805 - Engineering Case #552503)================
SQL statements which don't contain Transact-SQL OUTER JOINs, OUTER JOINs,
KEY JOINs, or NATURAL JOINs will now skip some of the optimizations implemented
in the server, which will improve the DESCRIBE time.
================(Build #3736 - Engineering Case #540195)================
When a connection attempted to autostart a server, but then failed to connect,
the client incorrectly attempted to autostart the server three times in some
cases. This has been fixed so that the client will now only attempt to autostart
the server once.
================(Build #3721 - Engineering Case #394857)================
Not all OLE DB schema rowsets are supported, however the most common and
useful rowsets are supported. Two OLE DB schema rowsets that were not supported
have now been implemented.
CATALOGS: The CATALOGS rowset identifies the physical attributes associated
with catalogs accessible from the DBMS. SQL Anywhere does not support the
notion of catalogs as some other database systems do. With that in mind,
the SQL Anywhere OLE DB provider will return a result set for CATALOGS containing
all currently started databases. The following is an example.
CATALOG_NAME DESCRIPTION
AnotherSample c:\SQLAnywhere10\Samples\sample.db
demo c:\SQLAnywhere10\Samples\demo.db
The CATALOG_NAME column contains the database name. The DESCRIPTION column
contains the physical location of the database on the database server computer.
SCHEMATA: The SCHEMATA rowset identifies the schemas that are owned by a
given user. The following is an example of a SCHEMATA rowset returned by
the SQL Anywhere OLE DB provider.
CATALOG_NAME SCHEMA_NAME SCHEMA_OWNER DEFAULT_CHARACTER_SET_CATALOG DEFAULT_CHARACTER_SET_SCHEMA DEFAULT_CHARACTER_SET_NAME
demo dbo dbo demo SYS windows-1252
demo GROUPO GROUPO demo SYS windows-1252
demo ml_server ml_server demo SYS windows-1252
demo rs_systabgroup rs_systabgroup demo SYS windows-1252
demo SYS SYS demo SYS windows-1252
The CATALOG_NAME column contains the name of the database to which you are
currently connected. The SCHEMA_NAME and SCHEMA_OWNER columns contain identical
values for SQL Anywhere databases. The DEFAULT_CHARACTER_SET_CATALOG column
always contains the name of the database to which you are currently connected
since character sets are associated with databases. The DEFAULT_CHARACTER_SET_SCHEMA
column is arbitrarily set to SYS since the character set in use for the database
is not owned by anyone. The DEFAULT_CHARACTER_SET_NAME column contains the
value of the "CharSet" database property.
Note, to get this new functionality in existing databases, do the following:
9.0.2 - upgrade the databases by loading and running Scripts\oleschem.sql
using Interactive SQL.
10.0.1 - run dbupgrad on each database, or connect to each database and
run ALTER DATABASE UPGRADE. As an alternative, the databases can be upgraded
by running Scripts\oleschem.sql using Interactive SQL.
11.0.0 - run dbupgrad on each database, or connect to each database and
run ALTER DATABASE UPGRADE.
================(Build #3710 - Engineering Case #533012)================
The connection property 'IsDebugger' has been added to allow connections
which are currently being used to run the procedure debugger to be distinguished
from normal connections. The value of connection_property('IsDebugger',number)
will be 'Yes' if "number" corresponds to the connection number
of a debugger connection, and 'No' otherwise.
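A minimal usage sketch (the connection number 3 is illustrative):

```sql
-- 'Yes' if connection 3 is a procedure debugger connection, 'No' otherwise
SELECT connection_property( 'IsDebugger', 3 );
```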
================(Build #3707 - Engineering Case #532238)================
In SQL Anywhere 10, SOAP web services used the server's HTTP HOST header
to generate the namespace returned in the SOAP response. This behaviour was
changed in SQL Anywhere 11 to always use the namespace sent in the SOAP request
for the response. This change in behaviour has now been integrated into 10.0
in order to better support SOAP clients built with version 10.0 and connect
to a server that has been upgraded to version 11.0.
================(Build #3704 - Engineering Case #528793)================
A new connection and database property called "Authenticated" has
now been added. The use of these two new properties is as follows:
For OEM servers, once an application has executed the "SET TEMPORARY
OPTION CONNECTION_AUTHENTICATION=" statement, the application can then
turn around and execute a "SELECT connection_property( 'Authenticated'
)" statement. If the result is "YES", then the connection
was properly authenticated and all is well. If, however, the result is "NO",
then the application can execute a "SELECT db_property( 'Authenticated'
)" statement. If the result of this statement is "YES", then
the database has been properly authenticated and the connection authentication
failed because the CONNECTION_AUTHENTICATION string is incorrect. If, on
the other hand, the result of querying the Authenticated database property
is "NO", then the connection authentication failed because the
database has not been properly authenticated. In this case, the customer
should examine the DATABASE_AUTHENTICATION string to determine what is wrong.
For non-OEM servers, the result of querying these new properties will always
be "NO".
================(Build #3643 - Engineering Case #491315)================
Version 10.0.0 of SQL Anywhere added support for the OPTION clause in the
SELECT statement. The OPTION clause has now been extended to the INSERT,
UPDATE, DELETE, SELECT, UNION, EXCEPT, and INTERSECT statements. The clause
allows users to override the settings for the following connection level
options at the statement level:
- isolation_level option [compatibility]
- max_query_tasks option [database]
- optimization_goal option [database]
- optimization_level option [database]
- optimization_workload option [database]
The server will now raise an "invalid option setting" error in
a predictable fashion if one of the unsupported options is used in the clause.
There was a possibility of the server leaking a small amount of memory that
has also been corrected.
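For example, a statement-level override of two of the supported options might
look like this (the option values chosen are illustrative):

```sql
SELECT table_name
  FROM SYS.SYSTAB
 ORDER BY table_id
OPTION( optimization_goal = 'First-row', max_query_tasks = 1 );
```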
================(Build #3635 - Engineering Case #490798)================
In a TOP n START AT m clause, the values of n and m were previously restricted
to being constants or variables. They are now allowed to be host variable
references. For example:
select top ? table_name
from SYS.SYSTAB
order by table_id
================(Build #3616 - Engineering Case #486288)================
The best plan for a query block with a DISTINCT clause, and one or more joins,
may be to use semijoins instead of inner joins based on the expressions used
in the select list. Inner joins can be executed as semijoins for tables,
derived tables, or views whose expressions are not used in the select list.
The server will now make a cost-based decision to use inner joins or semijoins
during the optimization phase.
================(Build #3616 - Engineering Case #421744)================
For an n-way join query block with a DISTINCT clause, the optimizer can now
choose a plan using semijoins instead of inner joins, based on the expressions
used in the select list. The inner joins can be executed as semijoins for
tables, derived tables, or views whose expressions are not used in the select
list. The decision to use inner joins or semijoins is done cost-based during
optimization by the SA optimizer.
Example:
select distinct p.*
from product p,
     ( select prod_id, id, count(*) C
       from sales_order_items
       group by prod_id, id ) as DT,
     sales_order as so
where p.id = DT.prod_id and so.id = DT.id
================(Build #3601 - Engineering Case #485349)================
Predicates of the form "column IS NULL" are now eliminated earlier
in the optimization process when the column is declared NOT NULL. This
increases the opportunity for rewrite optimizations to be performed.
================(Build #3600 - Engineering Case #485254)================
When using the SQL Anywhere debugger to step through a procedure or function,
if the current statement was an INSERT, UPDATE or DELETE, it was possible
to step into the trigger that would have been fired when the statement executed,
but if more than one trigger would have fired, it was not possible to step
through to the next trigger if it existed. This has now been corrected.
================(Build #3571 - Engineering Case #481824)================
An XML parameter can now be passed to an external function.
CREATE PROCEDURE mystring( IN instr XML )
EXTERNAL NAME 'xmlfunc@xmltools.dll';
XML parameters are passed in the same manner as LONG VARCHAR parameters.
XML parameters are available when using the "new" external function
call API.
================(Build #3551 - Engineering Case #478500)================
The Swedish tailoring of the UCA collation did not conform to the 2005 standards
of the Swedish Academy. In that standard, V and W were changed to be considered
different characters at the primary level. To support this change a new tailoring
has been implemented. To avoid incompatibilities with existing Swedish databases,
the new tailoring was implemented as the "phonebook" sorttype variant
of the Swedish UCA tailoring. For example:
dbinit -zn UCA(locale=swe;sorttype=phonebook)
================(Build #3528 - Engineering Case #473835)================
The HTTP server can now disable connections using SSLv2. The database server
command line option -xs now supports the SSLv2 parameter whose values can
be YES or NO. SSLv2 defaults to YES, but can be set to NO to disallow HTTPS
connections using SSL version 2.0. For example,
dbeng10 web.db -xs HTTPS(SSLv2=N)
================(Build #3521 - Engineering Case #473621)================
Attempting to autostart a server with no license file installed would have
failed with the error "Unable to start specified database: failure code
1." The license file has the same name as the server executable, with
a .lic extension. This has been fixed so that the more descriptive error
"Unable to start database server: missing license file" is now
given in this case.
================(Build #3500 - Engineering Case #446914)================
With this change, it is now possible to restrict the permissions of temporary
files created by the server and/or client. Traditionally, these files were
unconditionally created with global read, write and execute permissions.
To use this feature, a directory must be specified using the SATMP environment
variable and this directory must not be one of the standard locations:
- /tmp
- /tmp/.SQLAnywhere
- the value of the TMP environment variable, if set
- the value of the TMPDIR environment variable, if set
- the value of the TEMP environment variable, if set
- a symbolic link pointing to any of the above
When SATMP is set to such a non-standard location, the server and client
will walk up the given directory path looking for directories owned by the
current user with permissions set to 707, 770 or 700. For each directory
found, the appropriate permissions (other, group, other+group respectively)
will be set from the permission mask used to create temporary files.
For example, if the SATMP environment variable is set to: /tmp/restricted_permissions/sqlany,
where restricted_permissions is a directory with permissions 700, then all
files created in this directory will have permissions 700.
================(Build #3493 - Engineering Case #468867)================
When run on Windows CE devices, 1MB of address space was reserved by
the server for each thread, although only a portion was actually allocated,
or "committed". This has been changed for Windows CE PocketPC 2003
and newer devices, as the server now implements the -gss command line option
for these devices. The -gss option sets the stack size per internal execution
thread. The default and minimum stack size is 64K and the maximum is 512K.
================(Build #3486 - Engineering Case #462899)================
When using the command line options -qw "do not display database server
screen" or -qi "do not display database server tray icon",
a number of messages that should have been logged to the -o output file were
suppressed. This has now been corrected.
================(Build #3485 - Engineering Case #455926)================
An attempt to create a base table with the same name and owner as those of
an existing local temporary table was permitted by the server. However, the
newly created table could not have been accessed until the local temporary
table with the same name and owner was dropped from the current scope. The
server will now disallow the creation of a base table in this scenario.
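A sketch of the now-disallowed scenario (the table name is illustrative):

```sql
DECLARE LOCAL TEMPORARY TABLE t1( x INT );
-- Previously created an inaccessible base table; now raises an error.
CREATE TABLE t1( x INT );
```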
================(Build #3470 - Engineering Case #442935)================
A predicate of the form:
<column> LIKE <pattern>
where <column> is a column of exact numeric type <DOMAIN> and
<pattern> is a constant string containing no wild-cards should generate
a sargable predicate:
<column> = CAST( <pattern> AS <DOMAIN> )
For example, the query:
select * from systab where table_id like '1'
should consider using an index scan on systab.table_id = 1, but this inference
was not performed. This has been fixed. See also Engineering case 336013.
================(Build #3961 - Engineering Case #539772)================
Installs created by the SQL Anywhere Deployment wizard would only have appeared
in the Add or Remove Programs list in Control Panel, for the users that installed
the MSI. This behaviour has been changed. The install will now appear in
the Add or Remove Programs list for all users.
================(Build #3657 - Engineering Case #494297)================
When the Unload utility dbunload is used with a 10.0 or later database, the
version of dbunload used must match the version of the server used to access
the database. If an older dbunload is used with a newer server, or vice versa,
an error is now reported. This is most likely to occur if dbunload connects
to an already-running server. The same restriction applies to the Extraction
utility dbxtract.
================(Build #3630 - Engineering Case #489098)================
If the Log Translation utility (dbtran) detected an error during execution,
the SQL file it had generated up to that point was normally deleted to ensure
that a partial file was not used by accident. The -k command line option
has now been added to prevent the SQL file from being erased if an error
is detected. This may be useful when attempting to salvage transactions from
a damaged log.
================(Build #3548 - Engineering Case #478018)================
The certificate utilities createcert and viewcert are now available on Mac
OS X. They will be installed if the RSA components have been previously installed.
================(Build #4171 - Engineering Case #655157)================
If there were any missing messages, SQL Remote would have asked for a resend
after it had completed the number of receive polls given by the -rp option.
This resend logic could have caused the publisher to re-scan the transaction
log(s) and slow down the replication of new transactions, especially on heavily
loaded databases. This has been changed: when a message in a multi-part message
series is missing (SQL Remote generates multiple messages for a single
transaction, forming a multi-part message, when the transaction is too big to
fit in a single message), SQL Remote will not immediately ask for a resend if
the received messages are not followed by any messages that contain a commit,
or by any messages that belong to another multi-part message series. This new
logic will help users who need to shut down or kill SQL Remote while it is
sending multi-part messages to its subscribers.
================(Build #3835 - Engineering Case #556527)================
A new network protocol option 'http_buffer_responses' has been added. When
set to 'On', HTTP packets from MobiLink will be completely streamed into
an intermediary buffer before being processed, instead of processing the
bytes as they are read off the wire.
Syntax: http_buffer_responses = { off | on }
Protocols: HTTP, HTTPS
Default: off
Because of the extra memory overhead required, this feature should only
be used to work-around HTTP sync stability issues. In particular, the ActiveSync
proxy server for Windows Mobile devices will throw away any data that is
not read within 15 seconds after the server has closed its side of the connection.
Because MobiLink clients process the download as they receive it from MobiLink,
there is a chance they will fail to finish reading an HTTP packet within
the allotted 15 seconds causing synchronization to fail with stream error
code STREAM_ERROR_READ, when synchronizing using non-persistent HTTP. By
specifying 'http_buffer_responses=On', the client will read each HTTP packet
in its entirety into a buffer before processing any of it, thereby beating
the 15 second timeout.
================(Build #3595 - Engineering Case #485478)================
Support has now been added to deploy native amd64/x64 ESQL and C++ applications
to 64 bit Windows platforms (64 bit XP and later). The engine is supported,
as well as static and dynamic versions of the in-process runtime library.
Encryption is also supported, although FIPS is not.
The following new files are included in the install:
ultralite\x64
ultralite\x64\uleng10.exe
ultralite\x64\ulstop.exe
ultralite\x64\mlczlib10.dll
ultralite\x64\mlcrsa10.dll
ultralite\x64\mlcecc10.dll
ultralite\x64\lib
ultralite\x64\lib\vs8
ultralite\x64\lib\vs8\ulrt.lib
ultralite\x64\lib\vs8\ulimp.lib
ultralite\x64\lib\vs8\ulrt10.dll
ultralite\x64\lib\vs8\ulbase.lib
ultralite\x64\lib\vs8\ulrsa.lib
ultralite\x64\lib\vs8\ulecc.lib
ultralite\x64\lib\vs8\ulrtc.lib
ultralite\x64\lib\vs8\ulimpc.lib
ultralite\x64\lib\vs8\ulrtc10.dll
================(Build #3505 - Engineering Case #471339)================
UltraLite clients will now send their version and build number up to the
MobiLink server during synchronization. A line similar to the following
will appear in the server log:
Request from "UL 10.0.0.2862" for: ...
================(Build #3473 - Engineering Case #464158)================
A new statement, ALTER DATABASE SCHEMA FROM FILE, now allows an UltraLite
database schema to be altered. This statement replaces the 9.0.2 schema upgrade
feature that was implemented with the UpgradeSchemaFromFile() / ApplyFile()
methods in that release.
Because the UltraLite error callback remains active during the upgrade, the
application is notified of errors during the conversion process. For example,
SQLE_CONVERSION_ERROR reports, in its parameters, each value that could not
be converted. These errors do not mean that the process failed; in this case,
the final SQL code after the statement returns is warning 130. Such warnings
describe operations performed during the conversion and do not stop the
upgrade process.
Note: There is no mechanism to support the renaming of tables, columns or
publications; renaming a table is processed as a DROP TABLE followed by a
CREATE TABLE operation.
Caution: Resetting the device during the upgrade process leaves the database
unusable.
To upgrade the schema with this new statement:
1. Define a new schema by creating a SQL script of DDL statements. The character
set of the SQL script file must match the character set of the database you
want to upgrade.
The UltraLite utilities ulinit or ulunload can be used to extract the DDL
statements required for this script. Use these utilities to ensure that the
DDL statements required are syntactically correct.
- For ulunload, use the -n and -s [file] options.
- For ulinit, use the -l [file] option.
See the UltraLite Database Management and Reference documentation for details.
2. Review the script and ensure that:
- No non-DDL statements are included. Including non-DDL statements does not
have the expected effect.
- Words in each SQL statement are separated by spaces.
- Only one SQL statement appears on each line.
- Comments are prefixed with '--' and occur only at the start of a line.
3. Back up the existing database.
4. Run the new statement using the following syntax:
ALTER DATABASE SCHEMA FROM FILE '<filename>'
For example:
ALTER DATABASE SCHEMA FROM FILE 'MySchema.sql'
5. The existing database is upgraded with the new schema using the following
process:
- Both the new and existing database schemas are compared to see what
differs.
- The schema of the existing database is then altered accordingly.
- Rows that do not fit the new schema are dropped. When this occurs, a
SQLE_ROW_DROPPED_DURING_SCHEMA_UPGRADE (130) warning is raised.
For example, if a uniqueness constraint was added to a table, and there
are multiple rows with the same values, all but one of those rows will be
dropped. Alternatively, if an attempt to change a column domain causes a
conversion error, that row will be dropped; for example, if a VARCHAR column
is converted to an INT column and the value for a row is 'ABCD', that row
is dropped. Lastly, if the new schema has new foreign keys, foreign rows
that have no matching primary row are also dropped.
If dropping rows is not the desired behavior of the schema upgrade, detect
the warning and restore from backup.
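The steps above can be sketched as follows; the file name, table and index
definitions are illustrative only, not taken from the documentation:

```sql
-- MySchema.sql: one DDL statement per line, '--' comments only at line start.
-- The table and index below are hypothetical examples.
CREATE TABLE Customer ( cust_id INT PRIMARY KEY, cust_name VARCHAR(40) )
CREATE INDEX idx_cust_name ON Customer ( cust_name )

-- After backing up the database, apply the script from the application:
ALTER DATABASE SCHEMA FROM FILE 'MySchema.sql'
```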
================(Build #3557 - Engineering Case #480217)================
When using an UltraLite database with M-Business Anywhere, the MIMEList POD
may be used to display data from the database in tabular format. This new
feature is intended only for use with the MIMEList POD; note that it does
not support the full set of AGDBSet attributes and methods. The following
code snippets show the two ways to bind to the AGDBSet object:
Example 1: Binding UltraLite table to AGDBSet
connection = databaseManager.openConnection( openParms );
agdbSet = connection.getTableAGDBSet( "ULProduct" );
Example 2: Binding UltraLite result set to AGDBSet
statement = connection.prepareStatement( "select prod_id, prod_name,
price from ULProduct order by price" );
resultSet = statement.executeQuery();
agdbSet = resultSet.getAGDBSet();
================(Build #3886 - Engineering Case #568866)================
The UltraLite Initialize Database utility (ulinit) is used to create an UltraLite
database, based on information in the SQL Anywhere database that it is connected
to. If the schema being extracted from the SQL Anywhere database contained
elements that UltraLite did not support (like column datatypes or default
values), the utility would have failed. Ulinit has been changed so that by
default, it will attempt to create an UltraLite database that comes as close
as possible to the SQL Anywhere database. For example, if a column in the
SQL Anywhere database included the DEFAULT TIMESTAMP clause (a default that
UltraLite does not support), a warning is generated and a default of CURRENT
TIMESTAMP is generated instead. In particular, if a default in the SQL Anywhere
database is not supported in the UltraLite database, the default value is
ignored and creation continues. This enhancement was made because, in some
cases, it's possible the SQL Anywhere tables cannot be modified, and yet
a reasonable UltraLite alternative is available. The ulinit utility also
now has a -f switch that can be used to make the utility fail if the exact
schema does not match (in other words, the old behavior, where the utility
fails).
This fix also addressed a problem where warnings were emitted into the SQL
file when the ulinit utility was run with -l.
================(Build #3616 - Engineering Case #486877)================
When generating the download stream, in very rare circumstances, it was possible
for MobiLink to have incorrectly translated a string if the remote database
used a multi-byte character set. Typically, the ending byte(s) of
one string would end up at the start of the next string in the same row.
This problem has now been fixed.
================(Build #4216 - Engineering Case #667907)================
When creating a server message store for QAnywhere in Sybase Central, the
process could have failed with the error message "Can't find MobiLink
setup scripts". It has now been fixed.
Note, this problem occurred only on Linux systems.
================(Build #4165 - Engineering Case #653058)================
When creating a rule with a "Custom" schedule type, the schedule
that was saved could have been incorrect if "Run rule every" was
turned off in the "Schedule Editor" window. The rule was saved
such that it was run every 10 minutes. This has been fixed.
================(Build #4163 - Engineering Case #652577)================
Sybase Central could have crashed while the property sheet for a client message
store was open, if adding a new property was started, but then the action
was cancelled. This has been fixed.
================(Build #4099 - Engineering Case #635336)================
The MobiLink Server Log File Viewer would have shown empty user names and
remote IDs in its "Synchronizations" and "Details" panels
when running on a non-English Solaris, Mac OS X, or French Linux system,
and Sybase Central was set up to run in that non-English language. This
has been fixed.
================(Build #4075 - Engineering Case #630073)================
The contents of the combobox in the "Schedule Editor" window could
have been truncated on some systems, depending on which font was being used
by Sybase Central. This has been fixed.
================(Build #4060 - Engineering Case #626459)================
A UDP Gateway's property sheet would have shown the default destination port
as -1. This has been corrected so that the correct value of 5001 is now shown.
================(Build #4057 - Engineering Case #625625)================
Sybase Central would have generated an error when attempting to create a
notifier, gateway or carrier with any of the following characters in its
name: '[', ']', '^', '%', '_'. A similar error would have occurred when attempting
to rename a notifier, gateway or carrier and any of these characters were
used in the new name. This has been fixed.
================(Build #4052 - Engineering Case #624574)================
Attempting to connect to a newly created message store at the end of the
Client Store wizard could have failed if a network server was already running
on the computer and its "-gd ALL" option was not used. This has been
fixed.
================(Build #4046 - Engineering Case #622889)================
When entering a file location in the Deploy Synchronization Model wizard,
typing in a file name that did not include the folder would have resulted
in an error when clicking Next. This has been fixed. A workaround is to specify
the folder.
================(Build #3993 - Engineering Case #606888)================
When redeploying a synchronization model to a SQL Anywhere remote database
using the wizard initialized with the last settings, the extended options
for the SQL Anywhere client could have been corrupted. This has been fixed.
================(Build #3951 - Engineering Case #587246)================
If a synchronization model was created that contained mappings with errors,
and then the mappings were deleted or disabled, the synchronization model
still could not have been deployed. The workarounds were to either recreate
or re-enable the mapping, or to manually edit the synchronization model file
and remove the scripts with errors. This has been fixed.
================(Build #3894 - Engineering Case #558915)================
When deploying a Synchronization Model to file, any characters in .SQL files
that were not supported by the OS console code page would have been changed
to a substitution character, even though the characters would have been displayed
correctly in the MobiLink plug-in. This has been fixed so that .SQL files
now use UTF-8 character encoding. The generated .bat or .sh file is still
written using the console code page, since it must run in a console, but
the UTF-8 character encoding is now specified when the Interactive SQL utility
is invoked in the .bat or .sh file.
================(Build #3891 - Engineering Case #571282)================
When creating a synchronization model, if a custom download subset was chosen,
without specifying one or more tables to join, then the download_cursor events
would not have been generated. Instead errors like the following would have
appeared as comments in the Events editor:
/*
* ERROR: Unexpected error encountered while generating event.
* Error for: RHS of #set statement is null. Context will not be modified.
table-scripts\download_cursor.vm
* [line 59, column 8]
*/
This problem only happened in the New Synchronization Model wizard, not
when custom download subset was enabled in the Mappings editor. The problem
has been fixed for new synchronization models.
To work around the problem, in the Mappings editor change the Download Subset
(Dnld. Sub.) to None and then back to Custom, then switch back to the Events
editor.
================(Build #3890 - Engineering Case #570923)================
If a database error occurred when trying to install or update the MobiLink
System Setup, the error message would have included the SQL statement that
was being executed, which could have lead to the message box being too large
for the screen. This has been fixed. Now the SQL statement is only shown
when the Details are shown.
================(Build #3853 - Engineering Case #559653)================
When connected to an authenticated SQL Anywhere database from the MobiLink
plug-in in Sybase Central using the "Generic ODBC DSN" option,
the connection would have been read-only. This has been fixed.
================(Build #3830 - Engineering Case #555007)================
Sybase Central could have crashed when attempting to change the visible columns
(via View -> Choose Columns...), or column widths, while the MobiLink
11 node was selected in the tree. In addition, when in Model mode, the list
of columns under the View -> Sort menu would sometimes not have contained
all the displayed columns when the MobiLink 11 node was selected in the tree.
Both of these issues have now been fixed.
================(Build #3782 - Engineering Case #546869)================
The property sheet for connectors contained a "Transmission Rules"
page. This was incorrect because connectors do not have transmission rules;
they have delivery conditions. As a result, that page has been replaced with
a new "Delivery Conditions" page in which the single delivery condition
for the connector can be typed.
================(Build #3756 - Engineering Case #542239)================
If the Admin Mode Connection Script wizard was used to create event scripts
for the handle_UploadData and handle_DownloadData events, an "unknown
event" error would have occurred when synchronizing. The problem was that
the event scripts were created with the names "handle_uploaddata" and
"handle_downloaddata" (note the differences in case). This has been fixed.
================(Build #3722 - Engineering Case #535973)================
Changes made to property values on the "Client Properties" page,
in a server message store's property window, would not have been saved if
the client was "(Default)". This has been corrected so that they
are now saved correctly.
Also, if connecting using a QAnywhere connection profile was not possible,
Sybase Central would have crashed rather than reporting the error. This has
been corrected as well.
================(Build #3717 - Engineering Case #534320)================
Sybase Central could have crashed while using the QAnywhere plugin, if the
connection to a server message store was unexpectedly lost. This has been
fixed.
================(Build #3712 - Engineering Case #481976)================
When creating a new synchronization model for an existing remote database,
the column order may not have been correct for upload_fetch or upload_fetch_column_conflict
events. This has now been fixed. To fix existing synchronization models (after
installing this fix), each synchronizing table must be set to 'Not Synchronized',
the model saved, and then set back to their previous synchronization settings.
================(Build #3707 - Engineering Case #532452)================
The changes for Engineering case 530534 (which was a followup fix to Engineering
case 491400) were incomplete, resulting in the Overview marquee not updating
when zoomed out with the marquee at the leftmost position. This has been
fixed.
================(Build #3686 - Engineering Case #482703)================
The installed version of the MobiLink system setup would not have been found
by the Sybase Central MobiLink Plug-in for Microsoft SQL Server when the
default_schema was different from the connected user. This has been fixed
so that when checking schema with a Microsoft SQL Server consolidated database,
the default_schema is now used.
Note, a work around is to make the current user the owner of the MobiLink
system setup.
================(Build #3684 - Engineering Case #499301)================
When using the New Remote Tables command to add a table to a remote schema
in a synchronization model, if the consolidated table had columns matching
the timestamp column for timestamp-based download, or logical delete column
for logical deletes, then an invalid column mapping would have been created.
This would have caused script generation errors. This has been fixed. A work
around would be to create a new sync model.
================(Build #3678 - Engineering Case #496106)================
When deploying a Synchronization Model to a Microsoft SQL Server database
in which the names of the table owners differed from the current user's
user name, an error would have occurred. This has been fixed.
================(Build #3662 - Engineering Case #495225)================
When editing a synchronization model, if one or more rows in the column mapping
editor were marked for deletion, attempting to revert changes to the model
could have caused Sybase Central to crash. The same problem could have occurred
when attempting to select another item in the tree and answering "No"
to the "Do you want to save changes?" dialog. This has now been
fixed.
================(Build #3650 - Engineering Case #492960)================
Server-initiated synchronization requires the SendDownloadAck extended option
to be enabled, but this would not be enabled for a SQL Anywhere remote database
that had been set up for SIS through deployment from a synchronization model.
This has been fixed. The workaround for this is to enable SendDownloadAck
on the remote advanced options page when deploying.
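On a SQL Anywhere remote, SendDownloadAck can also be enabled directly as a
subscription extended option; a hedged sketch, with hypothetical publication
and user names:

```sql
-- Sketch only: 'sales_pub' and 'ml_user1' are hypothetical names.
ALTER SYNCHRONIZATION SUBSCRIPTION TO sales_pub
FOR ml_user1
OPTION SendDownloadAck='on';
```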
================(Build #3619 - Engineering Case #487721)================
When IMAP, POP3 or LDAP authentication was enabled for a synchronization
model, the generated authenticate_user event would have used the incorrect
case for the class name, and the generated MobiLink server command line would
not have enabled Java scripts with mlsupport.jar in the class path. Both
problems have been fixed. The workaround is to manually fix the script and
command line.
================(Build #3615 - Engineering Case #486574)================
When deploying a synchronization model, statements to create triggers did
not specify the owner (or schema) for the trigger, so permission problems
and invalid triggers could have resulted when deploying as a different user
than the table owner. This has been fixed so that the owner, or schema, is
now specified in the generated SQL for creating and dropping triggers for
Oracle, Microsoft SQL Server, ASE and DB2 consolidated databases. In SQL
Anywhere databases, a trigger is always owned by the same owner as the table,
so the problem did not occur.
A workaround is to deploy to a SQL file and manually edit the SQL.
================(Build #3603 - Engineering Case #485380)================
In the Create Synchronization Model wizard, if 'Download Subset by User or
Remote ID' used a column in the same table, it would only have been enabled
for tables where the column was also synchronized, and was a string type.
This has been fixed. Now the column must only exist in the consolidated table.
Note that the chosen column's type should be able to be implicitly compared
with a string, or errors may occur when downloading with the generated download
script.
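A download script generated for such a subset would, in rough outline, compare
the subset column with the MobiLink user name or remote ID. A hedged sketch
with hypothetical table and column names (the first placeholder is the
last-download timestamp, the second the user name or remote ID):

```sql
-- Sketch of a download_cursor with a subset column; names are hypothetical.
SELECT order_id, order_date, status
  FROM Orders
 WHERE last_modified >= ?
   AND assigned_user = ?
```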
================(Build #3594 - Engineering Case #484295)================
If a synchronization model was used to create timestamp-based downloads with
an Oracle consolidated database, and the MobiLink server used a different
timezone than the consolidated database, then some data might not have been
downloaded. The problem was that the trigger generated to maintain the last-modified
time used CURRENT_TIMESTAMP, which uses the client's timezone. This has been
fixed so that the generated triggers now use SYSTIMESTAMP (which uses the
consolidated database's time zone). A workaround is to manually change the
trigger, either in a generated SQL file or deployed in an Oracle consolidated
database.
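A corrected trigger would look roughly like the following Oracle sketch; the
table and column names are hypothetical:

```sql
-- Sketch only: maintains a hypothetical last_modified column.
CREATE OR REPLACE TRIGGER trg_customer_modified
BEFORE INSERT OR UPDATE ON Customer
FOR EACH ROW
BEGIN
   -- SYSTIMESTAMP uses the database server's time zone;
   -- CURRENT_TIMESTAMP would use the session (client) time zone.
   :NEW.last_modified := SYSTIMESTAMP;
END;
```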
================(Build #3583 - Engineering Case #483335)================
When entering a multi-line rule condition, it would have been saved in a
way that caused the lines to appear to have been run together when the
condition was next edited. This has been fixed.
================(Build #3522 - Engineering Case #473860)================
If a synchronization model was deployed with HTTPS or TLS as the stream type,
the generated batch file for starting the MobiLink server could have given
an "Unable to convert the string <string> to a numeric value"
error, because the stream parameters were incorrectly separated by commas
instead of semicolons. This has been fixed.
A workaround is to edit the generated batch file to use semicolons instead
of commas for the stream parameters by changing this line:
set STREAM="%STREAM_TYPE%(port=%PORT%,tls_type=%TLS_TYPE%,fips=%FIPS%,certificate=%CERTIFICATE%)"
to the following:
set STREAM="%STREAM_TYPE%(port=%PORT%;tls_type=%TLS_TYPE%;fips=%FIPS%;certificate=%CERTIFICATE%)"
================(Build #3510 - Engineering Case #472503)================
Deploying a synchronization model created with an existing UltraLite database
could have caused an error. This problem has been fixed.
================(Build #3497 - Engineering Case #469851)================
If a rule was created whose condition expression contained a newline character,
once saved to a ".qar" file, the file could not have been read
properly. Embedded newlines were not being escaped with the line continuation
character when they were written. This has been fixed so that the condition
is now saved correctly.
================(Build #3475 - Engineering Case #465042)================
The way destination aliases are handled by the QAnywhere server changed in
version 10.0.1 in a way that the plug-in did not handle correctly. The list
of history entries for multi-addressed messages included all of the messages
sent to the alias members. This has now been fixed so that only those history
entries whose addresses match the destination address shown in the Messages
panel are shown.
================(Build #3474 - Engineering Case #464834)================
The "Start Agent" menu item associated with a .QAA file was missing
a keyboard mnemonic. This has been corrected so that it now has one.
================(Build #3474 - Engineering Case #464348)================
The predefined variables ias_Originator and ias_StatusTime were missing
from the list of predefined variables shown in the Rule dialog used when
composing deletion or transmission rules. This has been fixed.
================(Build #3470 - Engineering Case #462939)================
In order to not miss conflicts, the upload_fetch and upload_fetch_column_conflict
scripts need to prevent modification of the rows they have selected before they
are updated. Previously, for SQL Anywhere consolidated databases, the upload_fetch
and upload_fetch_column_conflict scripts generated for a model with conflict
detection used the HOLDLOCK table hint. Now these scripts use the UPDLOCK
table hint. For scripts deployed to a SQL file already, replace HOLDLOCK
with WITH (UPDLOCK) in the upload_fetch and upload_fetch_column_conflict
scripts.
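An updated upload_fetch script would therefore take the following rough shape;
the table and column names are hypothetical, and the placeholder is the
uploaded primary key value:

```sql
-- Sketch only: locks the selected row against modification until commit.
SELECT cust_id, cust_name, phone
  FROM Customer WITH ( UPDLOCK )
 WHERE cust_id = ?
```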
================(Build #4073 - Engineering Case #629597)================
The initial position of the main window for the MobiLink Monitor could have
placed the window underneath the Windows task bar. This has been fixed.
Note, this problem also affected the Interactive SQL utility, Sybase Central,
and the SQL Anywhere Console utility.
================(Build #3856 - Engineering Case #562833)================
When the MobiLink Server had a large number of synchronizations running concurrently
(in the range of 10000), a MobiLink Monitor connected to it could have become
unresponsive, and not displayed new information in a timely manner. This
has been fixed.
================(Build #3839 - Engineering Case #554383)================
In the MobiLink Monitor Details Table, if the optional column "connection_retries",
or optional columns starting with "download_" or "sync_",
were enabled, the column labels for these columns would have been misaligned
by one or two columns. A similar problem would have occurred when exporting
to a database, where that data was exported to incorrect columns in the database
tables. Both of these problems have been fixed.
================(Build #3836 - Engineering Case #556925)================
The fix for Engineering case 553312 may have prevented restarting the MobiLink
Monitor after disabling the Details Table, Utilization Graph, or Overview
panes. This has been fixed. Pane sizes are now also properly restored when
re-enabling after restarting.
================(Build #3818 - Engineering Case #553312)================
If the MobiLink Monitor's Details Table, Utilization Graph, or Overview panes
were disabled, then when re-enabled they might not have been visible under
some circumstances, or a program error may have occurred. Also, resizing a
pane or the application could have produced unexpected results. These problems
have been fixed. Now
when resizing the application, only the Chart pane size is changed; and when
resizing a pane, only the panes on either side of the splitter bar are affected.
================(Build #3790 - Engineering Case #548455)================
When attempting to export synchronized data to an Oracle database, the application
could have given a false positive for a table's existence, which would have
resulted in an export failure since it would not have tried to create the
table for the current user. This has been fixed.
Also, exports to Oracle previously used the Date data type. Now, for Oracle
9 or later Timestamp is used instead of Date.
================(Build #3737 - Engineering Case #539499)================
If the Overview, Details Table or Graph were disabled in the MobiLink Monitor,
closing the Monitor and restarting it would have resulted in a Java null
pointer exception. This has been fixed. A workaround is to edit the settings
file (.mlmMonitorSettings in version 10 and earlier, .mlMonitorSettings11
in version 11) to restore display of the disabled feature. For the Overview,
change ShowOverview=false to ShowOverview=true. For the Table, change ShowTable.
For the Graph, change ShowGraph.
================(Build #3731 - Engineering Case #538156)================
Long-running MobiLink Monitors could have hung or crashed with a RuntimeException
"the monitor doesn't send any-order commands". This has been fixed.
================(Build #3693 - Engineering Case #530534)================
The changes made for Engineering case 491400, to correct a problem with the
marquee in the overview panel flashing excessively when connected to a MobiLink
server, introduced drawing artifacts when the horizontal scroll bar was used
to move the marquee. This has been fixed.
================(Build #3642 - Engineering Case #491400)================
The marquee in the overview panel would have flashed excessively when connected
to a MobiLink server. This has been fixed. A workaround is to drag out the
marquee to a new region, or to pause the auto scrolling.
================(Build #3553 - Engineering Case #478704)================
When connecting to a MobiLink server via HTTP or HTTPS, the Monitor sent
more HTTP or HTTPS requests than necessary. Excessive flushing caused most
requests to be shorter than they should have been. This has been fixed.
================(Build #3531 - Engineering Case #475263)================
The MobiLink Monitor could have failed with a NegativeArraySizeException.
The failure was more likely when under very heavy load. This has now been
fixed.
================(Build #3984 - Engineering Case #605417)================
The QAManagerFactory.getInstance() method of the QAnywhere .NET client would
have thrown the exception System.DllNotFoundException when the native library
qany9.dll or qany10.dll was missing. This exception may have been unexpected
by a QAnywhere application, and has now been fixed. A QAException is now
thrown in this situation, with ErrorCode 1000 (QAException.COMMON_INIT_ERROR)
and Message containing the System.DllNotFoundException.
================(Build #3715 - Engineering Case #533612)================
On slow devices, the QAnywhere client (qaagent) would sometimes have given
the following error messages at start up: "Error registering with DBLSN
code: -1" and "Failed to start QAnywhere Agent (register with DBLsn)".
This has been fixed so that the QAnywhere client is now much more tolerant
to lengthy dblsn startup times.
================(Build #3715 - Engineering Case #533249)================
The download phase of a synchronization could have failed with a -194 error
("No primary key value for foreign key"). This was most likely
to have occurred during large synchronizations, or when the database engine
was under considerable stress. This has now been fixed.
================(Build #3703 - Engineering Case #531730)================
After modifying the incremental download size of the QAnywhere Agent using
the -idl option, it would not have been possible to reset the size to the
default value of -1. Attempting to set the size to -1 would have left the
incremental download size unchanged.
This has been fixed. Now, specifying any non-positive number for the -idl
option will reset the incremental download size to -1.
================(Build #3679 - Engineering Case #496969)================
When the QAnywhere Agent was running on a device that was not connected to
a network, each time a QAnywhere application queued a message the CPU usage
increased slightly. This has been fixed so that now, when the device is
not connected to a network, queueing a message uses about the same amount
of CPU regardless of whether or not the QAnywhere Agent is running. Moreover,
the required CPU usage stays constant as messages are queued.
================(Build #3661 - Engineering Case #494356)================
A QAnywhere .NET application could have hung if a QAManager API method was
interrupted by an exception in one thread, and another thread subsequently
called a method on the interrupted QAManager. This has been fixed.
================(Build #3641 - Engineering Case #491104)================
Any modifications to the client message store properties, made after the
client's first synchronization, would not have been propagated to the server
message store as expected. This has been fixed.
================(Build #3641 - Engineering Case #490862)================
A QAnywhere .NET application could have crashed with a memory access violation
when terminating. This was due to a race condition, which has been fixed.
================(Build #3601 - Engineering Case #485588)================
The QAnywhere Agent could have used an excessive amount of memory during
message transmission when a large number of messages were queued. This has
been fixed.
================(Build #3592 - Engineering Case #484359)================
If a user was using the QAnywhere SQL API to receive messages asynchronously
with the ml_qa_listen_queue procedure, and another user using the same message
store sent a message to this queue (i.e. local messaging), the message would
not have been received.
This has been fixed.
================(Build #3592 - Engineering Case #484293)================
When using the QAnywhere Client SQL API to receive messages, they would not
have been synchronized back to the originator. This caused the messages to
remain in the "Pending" state indefinitely on the originating
client, and on the server. This has now been fixed.
================(Build #3592 - Engineering Case #484272)================
The QATransactionalManager class would have failed to re-receive a message
with large content (exceeding MAX_IN_MEMORY_MESSAGE_SIZE in size) after it
was received once and a rollback was done. This problem applied to the C#,
Java and C++ QATransactionalManager classes. This has now been fixed.
================(Build #3592 - Engineering Case #484266)================
The QAnywhere Agent and MobiLink Listener could have crashed when started
with the "@file" command line option, if "file" did not
exist. This has been fixed.
================(Build #3580 - Engineering Case #480759)================
When making a call to a QAManager with the DBF and AutoStart connection parameters,
the database server would not have been autostarted. Instead, a -101 error
"not connected" would have been logged by the QAManager, but was not reported
back to the application. It should be noted that a QAManager will autostart
the database server when the Open method is called. The issue was that when
the database server was shut down after Open had been called, then subsequent
QAManager operations would have failed because the database connection had
been terminated, but the error codes returned to the application do not indicate
that the connection to the database was bad, thus not allowing the application
to Close and Open the QAManager to recover from the error. This has been
fixed. If the ErrorCode of a QAException is greater than or equal to 2000,
then the error means the same as ErrorCode - 1000, and also that a database
connection failure has occurred (ie. SQL Anywhere native code -101). When
a database connection error is detected, it is possible to re-Open a QAManager
without recreating it and setting its properties and message listeners again.
This is done by calling Close() then Open() again. Note that the properties
of the QAManager cannot be changed after the first Open(), and subsequent
Open() calls must supply the same acknowledgement mode.
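The error-code convention and the recovery path described above can be sketched as follows. This is an illustrative Python model only (the real QAManager API is C#, Java and C++); the `manager` object with `close()`/`open()` methods is a stand-in for a QAManager:

```python
CONNECTION_LOST_OFFSET = 1000  # codes >= 2000 encode "base error + connection lost"

def decode_error(code):
    """Split a QAException-style error code into (base_code, connection_lost).
    A code of 2101 means the same as 1101, plus a database connection failure."""
    if code >= 2000:
        return code - CONNECTION_LOST_OFFSET, True
    return code, False

def handle_error(manager, code):
    """On a connection failure, the manager can be reused without recreating
    it or resetting its properties: Close() then Open() again, using the
    same acknowledgement mode as the first Open()."""
    base, lost = decode_error(code)
    if lost:
        manager.close()
        manager.open()  # must use the same acknowledgement mode as before
    return base
```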
================(Build #3566 - Engineering Case #481022)================
A Java mobile webservices application compiled with JDK 1.5 could have failed
with an error at the server saying "'SOAP-ENV' is an undeclared namespace."
This has been fixed.
Note that because the mobile webservices runtime (iawsrt.jar) is built with
JDK 1.5, Java mobile webservices applications must be compiled with JDK 1.5
and up as well.
================(Build #3510 - Engineering Case #472920)================
A mobile webservices client application could have failed with a NullReferenceException
when processing a SOAP response that contained elements described as <any>
in the WSDL description. This has been fixed.
Note: this problem occurs when processing <row> elements contained
within <SimpleDataset> elements in result sets returned by SQL Anywhere
SOAP services.
================(Build #3501 - Engineering Case #470812)================
If the QAnywhere Agent was started with a custom policy (ie. transmission
rules) where each rule was a scheduled rule, QAnywhere would still have behaved
as though the policy was automatic. The messages put in a queue would have
been transmitted immediately, and push notifications would have resulted
in an immediate message transmission, instead of message transmissions happening
on the defined schedule. This has been fixed: when the transmission rules
are all scheduled rules, message transmissions now happen only at the scheduled
times.
================(Build #3494 - Engineering Case #469146)================
If the return type of a method in a WSDL document contained the method name,
the WSDL compiler would have generated an incorrect C# method signature for
the asynchronous method call.
For example:
public WSResult AsyncCategoryBrowseResponseResponse CategoryBrowse(CategoryBrowseRequest request)  // incorrect
should be:
public WSResult AsyncCategoryBrowse(CategoryBrowseRequest request)  // correct
This has now been fixed.
================(Build #3493 - Engineering Case #469002)================
When run on Windows CE, the reserved stack sizes for all threads in the QAnywhere
agent, the Listener and the MobiLink client have been changed to be as follows:
qaagent.exe: 64 KB
dblsn.exe: 64 KB
dbmlsync.exe: 128 KB
Previously, 1MB per thread of address space was reserved, while only a portion
was actually allocated, or "committed".
================(Build #3492 - Engineering Case #468735)================
If a QAnywhere message in the server database with an expiration date, was
synchronized down to a client device before it had expired, the message would
not have transitioned to an expired state, and hence would not have been
deleted by the default server delete rule. This has been fixed.
Note that a QAnywhere message in the server database with an expiration
date, that is not delivered to the client device before it expires, will
also transition to an expired state and be deleted by the default server
delete rule. This was the case before this change.
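The corrected expiration behaviour can be modelled with a single predicate. This is an illustrative Python sketch, not the server's implementation; times are plain numbers here for simplicity:

```python
def is_expired(expires_at, now):
    """A message with an expiration date transitions to the expired state
    once that date passes, whether or not the message has already been
    downloaded to a client device. Expired messages then become eligible
    for the default server delete rule. A message with no expiration date
    (expires_at is None) never expires."""
    return expires_at is not None and now >= expires_at
```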
================(Build #3492 - Engineering Case #467246)================
When there were a large number of messages in the message store that were
ready for transmission (eg. 200 12KB messages), and the QAnywhere Agent was
started on a Windows Mobile 5 device, the synchronization process would have
consumed 100% CPU for a significant period of time (eg. 1 minute). Further,
if the upload failed after it had started, for whatever reason, each subsequent
synchronization would have consumed CPU for a longer period each time. This
performance problem has now been significantly alleviated so that synchronizations
with QAnywhere will not get progressively longer after upload failures.
================(Build #3489 - Engineering Case #467712)================
A .NET application would have crashed when trying to send a text message
if the QAManager was closed. Problems with detecting that message repository
objects were open have been corrected. Now, the message: "The QAManager
is not open." (error code 1021) will be returned.
================(Build #3478 - Engineering Case #465708)================
After it has started all necessary processes, the QAnywhere Agent now prints
a line like this to the console window and log file:
I. 2007-04-11 11:21:54. There are 23 processes running
This is useful in diagnosing problems on Windows CE devices with Windows
Mobile 5, and previous OSes, because there is a fixed limit of 32 processes
that can be running at once. After that, the OS will start shutting down
applications in a not completely deterministic way.
================(Build #3474 - Engineering Case #461846)================
It was possible for QAnywhere applications to get into a state where calls
to GetQueueDepth would have taken an unusually long time to return, and eventually
have thrown a QAException "error getting queue depth". If a device
crashed, or was powered off while the QAnywhere Agent was marking messages
to be uploaded, a flag was left set that GetQueueDepth checked. This problem
has been fixed by adding code to reset the flag in appropriate circumstances.
================(Build #4012 - Engineering Case #614034)================
When a 9.0.2 QAnywhere client synchronized, the MobiLink server would have
displayed the following errors:
Expecting 1 parameters in script, but only found 4: update ml_qa_global_props
set modifiers = ?, value = ? where client = ? and name = ?
Unable to open upload_update .
This has been fixed by a change to the upload_update script for the table
ml_qa_global_props, version ml_qa_2.
================(Build #3970 - Engineering Case #593120)================
The QAnywhere server could have stopped sending and receiving messages with
an Enterprise Messaging Server, through its JMS connector, when connectivity
to the EMS was interrupted and subsequently restored. This has been fixed.
================(Build #3952 - Engineering Case #585035)================
The QAnywhere server could have stopped sending/receiving messages with an
Enterprise Messaging Server, through its JMS connector, when a SQLException
was thrown by the JDBC driver. This has been fixed.
Where possible, the QAnywhere server should recover gracefully from exceptions
thrown by the JDBC driver and continue processing messages.
================(Build #3800 - Engineering Case #550080)================
In the Sybase Central QAnywhere Plugin, if a client was created while connected
to a server message store, and the view was then refreshed, the newly created
client would not have been displayed. This has been fixed.
================(Build #3779 - Engineering Case #546171)================
When a delivery condition that referenced message properties was specified
for a QAnywhere connector, message transmission to the connecting messaging
system would have been disabled. This has been fixed.
================(Build #3779 - Engineering Case #546164)================
During the execution of server transmission rules, it was possible for the
QAnywhere server to repeatedly report a java.util.NoSuchElement exception,
and abort the rule execution. This has been fixed.
================(Build #3775 - Engineering Case #545690)================
When a message was sent to a destination alias, the QAnywhere Server may
not have immediately generated push notifications for some members of the
alias. This could have resulted in the server taking as long as a minute
to push notifications to clients. This has been fixed.
================(Build #3772 - Engineering Case #467274)================
When a QAnywhere application (using SQL Anywhere as the message store) queued
messages in time zone A, and then the time zone of the device was changed
to time zone B with time earlier than time zone A, the queued messages would
not have been transmitted until the time in time zone B reached the time
that the messages were queued in time zone A. This has been fixed so that
the messages queued in time zone A are now sent immediately when the device
is online in time zone B.
Note that the issue of time zone independence with QAnywhere has not been
completely addressed. All time values used in transmission rules refer to
local time. Also, the special variable ias_StatusTime, used in transmission
rules, refers to local time.
================(Build #3727 - Engineering Case #537595)================
The changes for Engineering case 534179 introduced a problem where the QAnywhere
Server's logging messages could have been output as garbled text. This has
now been corrected.
================(Build #3715 - Engineering Case #533728)================
A small window of opportunity existed in the QAnywhere server where a statement
could be closed and removed from the statement cache, just as another thread
was preparing the statement to be closed. This resulted in some operations
being performed on a closed statement, resulting in a JDBC error. This has
been fixed.
================(Build #3705 - Engineering Case #531967)================
The QAnywhere Server would have thrown an ObjectRepositoryException if it
was configured to use a delete rule with an empty condition clause. That
is, if a rule was given that had nothing written to the right of the equals
sign; one such rule might look like: "AUTO=". This has been fixed.
Specifying an empty condition clause now specifies that all available messages
should be deleted.
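The new interpretation of an empty condition can be modelled with a tiny parser. This is an illustrative Python sketch only, not the server's actual rule grammar:

```python
def parse_delete_rule(rule):
    """Split a 'SCHEDULE=CONDITION' delete rule at the first equals sign.
    An empty condition clause (e.g. "AUTO=") now means that all available
    messages should be deleted, represented here as condition None."""
    schedule, sep, condition = rule.partition("=")
    if not sep:
        raise ValueError("delete rule must contain '='")
    condition = condition.strip()
    return schedule.strip(), (condition if condition else None)
```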
================(Build #3703 - Engineering Case #531766)================
If a JMS message bound for a QAnywhere client was missing its native address,
and no default address was specified for the JMS connector, the QAnywhere
Server would have reported a NullPointerException. This has been fixed. The
server now reports the proper error message.
================(Build #3691 - Engineering Case #523757)================
The QAnywhere Server did not always report errors after processing a badly
formatted Server Management Request (SMR). The SMRs that suffered from this
problem were those that contained any XML elements that did not exactly match
those expected by the server (ie, misspelled elements, or elements not included
in the DTD), in which case the processing of the request would fail silently.
This has been fixed so that the QAnywhere server will now report an error
whenever it comes across an unrecognized XML element. The QAnywhere server
will now also validate the XML elements in a case insensitive way. As long
as the opening tag matches the closing tag, the case is ignored.
================(Build #3686 - Engineering Case #499959)================
With ASA databases the QAnywhere server caches prepared statements to avoid
re-preparing them on each statement execution. During periods of high activity
however, the server could have reported "Resource governor for 'prepared
statements' exceeded", followed by the failed execution of a SQL statement.
This problem has now been fixed.
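The idea of a bounded prepared-statement cache that stays under a resource-governor limit can be sketched as follows. This Python model is only an illustration of the technique, not the QAnywhere server's implementation; the `prepare` callable is a hypothetical stand-in for statement preparation:

```python
from collections import OrderedDict

class StatementCache:
    """Cache prepared statements up to a fixed limit, evicting the least
    recently used one so the total count never exceeds the limit."""
    def __init__(self, limit, prepare):
        self.limit = limit
        self.prepare = prepare   # callable: SQL text -> prepared statement
        self._stmts = OrderedDict()

    def get(self, sql):
        if sql in self._stmts:
            self._stmts.move_to_end(sql)      # mark as most recently used
            return self._stmts[sql]
        stmt = self.prepare(sql)
        self._stmts[sql] = stmt
        if len(self._stmts) > self.limit:
            self._stmts.popitem(last=False)   # evict least recently used
        return stmt
```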
================(Build #3618 - Engineering Case #487581)================
Initializing a scheduled transmission rule containing an "EVERY"
clause and a "START DATE" clause set to a date that had already
passed, in the QAnywhere server or in the QAnywhere UltraLite Agent, would
have caused the rule to be immediately
and repeatedly executed many times on startup. This has been fixed.
================(Build #3608 - Engineering Case #486050)================
Logs created during the nonblocking_download_ack synchronization event were
being logged with the logger source name of "ianywhere.unknown.category",
instead of the remote id name of the client being synchronized as expected.
This has been fixed.
================(Build #3590 - Engineering Case #478153)================
When the MobiLink server was running with QAnywhere messaging enabled, a
JMS connector was configured, and there was a high volume of JMS traffic
between QAnywhere clients and a JMS system, the Java VM running in MobiLink
would have used a continuously increasing amount of heap memory and may eventually
have reached an out-of-memory condition. This has been fixed.
================(Build #3581 - Engineering Case #482742)================
QAnywhere documentation lists IAS_TEXT_CONTENT and IAS_BINARY_CONTENT as
constants that can be used to refer to the two different message content
types in selector, transmission, and delete rules. However, the QAnywhere
server was recognizing the constants IAS_TEXTCONTENT and IAS_BINARYCONTENT
instead. This would have caused rules using the documented constants to not
work as desired. This has been fixed so that both constant formats are now
recognized.
================(Build #3581 - Engineering Case #482741)================
If a QAnywhere Server Management Request was used to cancel messages in a
Server Store, messages were cancelled even after they had already been downloaded
to the message recipient. This could cause consistency problems in the server
store and possibly disable message transmissions on the recipient device.
This has been fixed.
================(Build #3562 - Engineering Case #480532)================
The QAnywhere server's logging mechanism was allocating a large amount of
additional memory for each new client. This could have caused the MobiLink
server to run out of memory when working with a large number of clients.
This has been fixed.
================(Build #3557 - Engineering Case #477156)================
When a large number of clients (more than 2500) had contacted the MobiLink
server, with all indicating they wished to receive push notifications, the
server would have consumed a large amount of CPU evaluating transmission
rules. This has been fixed.
NOTE (for 9.0.2 only): It is also important to run the MobiLink server
with the command line option
-sl java ( -Dianywhere.qa.db.upgradeDeliveryColumns=true )
in order to get the most out of this performance improvement. This option
causes the MobiLink server to reorganize the QAnywhere message store tables,
and add further indexes to these tables, to obtain optimal performance of
MobiLink with QAnywhere.
================(Build #3547 - Engineering Case #477294)================
There was a security issue with logging on the QAnywhere client during initialization.
This has been fixed.
================(Build #3513 - Engineering Case #472632)================
Messages with a NULL originator in the server message store, would have caused
message processing to halt. This has been fixed as follows: the QAnywhere
Agent has been changed so that messages with a NULL originator will not be
uploaded until the store ID is set on the client. The QAnywhere connector
has been changed so that if a message with NULL originator somehow gets into
the server message store, the message will be marked as unreceivable and
skipped, not halting further message processing.
================(Build #3508 - Engineering Case #471045)================
The memory usage of the MobiLink server with QAnywhere messaging enabled
would have increased by a small amount at each client synchronization. The
amount of increase was reduced by about 90% by changes made for Engineering
case 471798. This increase has now been reduced by a further 5%. While not
completely resolved, the memory increase has been significantly reduced,
and it continues to be addressed.
================(Build #3475 - Engineering Case #465712)================
When running the consolidated database on a server that uses snapshot isolation
(Oracle 10g for example), it was possible that MobiLink would have redelivered
messages to QAnywhere clients that were previously received and acknowledged.
This problem would have occurred when there was a long-running transaction
on the consolidated database, which caused the last_download_timestamp to
stay fixed at the time that the transaction began. It has now been fixed.
================(Build #4167 - Engineering Case #653930)================
The Console utility could have stopped refreshing database and/or server
properties after changing the set of properties which were displayed, even
after it was restarted. The problem was sensitive to the speed with which
properties were selected or unselected. This has been fixed.
================(Build #4152 - Engineering Case #647851)================
During HTTPS synchronizations, MobiLink clients could have crashed in MobiLink's
RSA encryption library. This has been fixed.
================(Build #4149 - Engineering Case #638242)================
In rare situations, when multiple instances of the MobiLink client (dbmlsync)
were run concurrently on the same machine, one or more of the instances may
have crashed. It was possible that this problem might also have manifested
itself as corrupt data sent to the MobiLink server, but that would have been
extremely unlikely. This behaviour has been fixed.
================(Build #4109 - Engineering Case #637333)================
When using the ADO.NET Provider with a .NET Framework 4.0 Client Profile,
Visual Studio 2010 generated some compile errors. This problem has been fixed.
================(Build #4106 - Engineering Case #636557)================
Attempting to delete properties and transmission rules from the clients defined
within a Server Message Store, could have failed either with or without an
error message. This has been fixed.
================(Build #4096 - Engineering Case #635072)================
Specifying a single, empty authentication parameter on the dbmlsync commandline,
or using a synchronization profile, would have caused dbmlsync to report
"out of memory". For example specifying the following on the commandline
would have caused the error:
-ap ""
This problem has been fixed.
Note, a workaround is to specify the parameter using a single comma (for
example: -ap ,). This passes a single empty authentication parameter but does
not cause the "out of memory" error.
================(Build #4051 - Engineering Case #624021)================
The documentation erroneously indicated that for Windows and Windows CE,
if no trusted certificates were provided, MobiLink clients would automatically
load the certificates from the OS's trusted certificate store. This feature
has now been implemented.
================(Build #3894 - Engineering Case #572196)================
It was possible for the MobiLink client (dbmlsync) to have sent an incorrect
last download timestamp up to the MobiLink server, if dbmlsync had been running
on a schedule, and ALL of the following had occurred during the last synchronization:
1) All of the data in the download stream had been applied, but had not
yet been committed to the remote database.
2) An SQL Error had been generated by dbmlsync before the download had been
committed. Examples of errors that could have occurred include an error occurring
in the sp_hook_dbmlsync_download_ri_violation or the sp_hook_dbmlsync_download_end
hooks, or an error occurring as dbmlsync had attempted to resolve referential
integrity issues.
3) Another hook had been defined in the remote database that would have
executed on another connection. For example, the sp_hook_dbmlsync_download_log_ri_violation
or the sp_hook_dbmlsync_all_error hooks would have executed on a separate
connection.
This problem has now been fixed, and the proper last download timestamp
is now sent up to the MobiLink server in the synchronization when this situation
occurs.
================(Build #3887 - Engineering Case #570503)================
When using secure streams and an invalid TLS handshake occurred, the MobiLink
server could have waited for a full network timeout period before disconnecting.
This has been fixed. The MobiLink server will now immediately terminate the
network connection with a "handshake error" error message.
================(Build #3865 - Engineering Case #563844)================
The MobiLink client (dbmlsync) would have occasionally reported the error:
Failed writing remote id file to '<filename>'
Despite the error, synchronizations would have continued successfully, and
the remote id file would have appeared on the disk in good order. This problem
has been fixed.
================(Build #3853 - Engineering Case #562027)================
When running on Sun SPARC systems, the MobiLink client (dbmlsync) would have
complained about "missing transaction log files", if there were
any offline transaction log files bigger than 2GB. This problem has now been
fixed.
================(Build #3849 - Engineering Case #560943)================
The dbmlsync ActiveX component was not able to launch the dbmlsync application
properly on Windows if some or all of the dbmlsync options were given in a
file, and the dbmlsync command line contained the option @filename. This
problem has now been fixed.
================(Build #3831 - Engineering Case #555444)================
When synchronizing using HTTPS through an HTTP proxy, MobiLink clients would
have incorrectly appended the url_suffix to the HTTP CONNECT request, which
could have caused some proxies and servers to fail. This has been fixed.
================(Build #3822 - Engineering Case #580190)================
If an error had occurred while the MobiLink client (dbmlsync) was applying
a download, and there had also been referential integrity errors that dbmlsync
could not resolve, then dbmlsync would have reported that the download had
been committed, even though it had been rolled back. This has been corrected
so that dbmlsync now correctly reports that the download was rolled back.
================(Build #3821 - Engineering Case #554271)================
If the MobiLink client (dbmlsync) was run against a database with a character
set that was different from the operating system's character set, then errors
generated by the database would have been garbled when displayed in the dbmlsync
log. This has been corrected so that these messages will now be displayed
correctly.
================(Build #3773 - Engineering Case #544956)================
If the MobiLink client was performing a synchronization, and the status of
the last synchronization was unknown, it was possible for the MobiLink server
to have reported that the synchronization had started twice. The MobiLink
log, with no extra verbosity, might contain the following messages:
Request from "Dbmlsync Version 10.0.1.3750" for: remote ID:
rem1, user name: rem1, version: v1
Request from "Dbmlsync Version 10.0.1.3750" for: remote ID:
rem1, user name: rem1, version: v1
Synchronization complete
This problem has been fixed.
================(Build #3755 - Engineering Case #542185)================
When the properties of the visual or non-visual version of the Dbmlsync ActiveX
Component were examined from a development environment, it would incorrectly
have been described as "iAnywhere Solutions Dbmlsync ActiveX Component
9.0.1". The string has now been changed to properly reflect the true
version of the component.
================(Build #3741 - Engineering Case #540407)================
In the dbmlsync log file it was possible for a message to occasionally be
omitted, or for two messages to be mixed together. For example, a line like
the following might occur in the log:
E. 2008-08-06 16:24:34. Timed out trying to readTimed out trying to
read 7 bytes.
This has been fixed.
================(Build #3734 - Engineering Case #538936)================
The Dbmlsync Integration Component could have crashed during a call to the
Run method. As well, the OS would sometimes have detected a heap error. This
has been fixed.
================(Build #3715 - Engineering Case #533746)================
If a download was interrupted by a network failure, it was possible for the
MobiLink client (dbmlsync) to fail to create a restartable download file.
Furthermore, dbmlsync would have displayed a network error to the dbmlsync
log, but then attempted to apply the partial download, which would almost
certainly have failed. This has been fixed so that dbmlsync now creates a
restartable download file and does not attempt to apply the partial download.
================(Build #3708 - Engineering Case #546742)================
The MobiLink client (dbmlsync) could have crashed when reporting certain
TLS or HTTPS errors. Certain TLS errors could have caused a null pointer
dereference during creation of the error message string. This has now been
corrected.
================(Build #3628 - Engineering Case #488862)================
The MobiLink client could have reported internal error 1003. This was most
likely to have occurred when the increment size was quite large, or if the
server was slow to apply the upload. This has now been corrected.
================(Build #3620 - Engineering Case #486539)================
A synchronization could have failed with the error:
- Could not find subscription id for subscription of <ML user> to
<publication>.
or
- SQL statement failed: (-101) Not connected to a database
if all the following were true:
1) the synchronization was scheduled and the time before the next scheduled
sync was more than 2 minutes
2) for some row in the syssync table, "log_sent" was greater than
"progress". (This occurs when dbmlsync sends an upload to the
MobiLink server, but does not receive an ack/nack to indicate that the upload
was applied to the consolidated database or not)
3) hovering was enabled
This problem has now been fixed.
================(Build #3619 - Engineering Case #487690)================
Normally, the MobiLink client dbmlsync ignores the server side state on the
first synchronization of a subscription. However, dbmlsync might have respected
the server side state on a first synchronization if an exchange was performed
with the server to confirm the progress offsets of other subscriptions that
had previously synchronized. As a result of
this, data could have been lost and synchronizations could have failed with
the error "Progress offset mismatch, resending upload from consolidated
database's progress offset" being reported twice. This has been fixed.
================(Build #3619 - Engineering Case #487689)================
If the MobiLink client dbmlsync did not receive an ack/nack from the server
after sending an upload it has no way of knowing whether the upload was successfully
applied to the server. The best way to resolve this situation is to perform
an 'extra exchange' with the server before the next synchronization to request
the status of the upload, but dbmlsync did not perform this extra exchange
after an unacknowledged upload that occurred during the first synchronization
of a subscription. This would not have resulted in any data loss, but might
have increased the time required for the next synchronization as it might
cause two uploads to be built and sent to the MobiLink server. This has been
corrected so that an extra exchange is now performed in this case to eliminate
the possibility of sending two uploads.
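The corrected decision can be modelled in a line of Python. This is a hedged sketch of the logic described above, not dbmlsync's actual code:

```python
def needs_extra_exchange(ack_received, first_sync_of_subscription):
    """After sending an upload, the client only knows the upload's fate if
    an ack/nack arrived. Without one, an 'extra exchange' asks the server
    for the upload status before the next upload is built. Previously the
    first synchronization of a subscription was (incorrectly) excluded:
        return not ack_received and not first_sync_of_subscription
    Now the extra exchange happens after any unacknowledged upload."""
    return not ack_received
```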
================(Build #3619 - Engineering Case #487687)================
If the MobiLink client dbmlsync failed during the brief time between when
an upload was completed and when state information in the database was updated,
then the server would not have been queried at the start of the next sync
to determine if the upload was successfully applied to the consolidated database.
This problem would have occurred extremely rarely, and in most cases would
have been harmless. The result was simply that the next synchronization
took a little longer because an upload was built and uploaded, then rejected
by the server, and a new correct upload was built and uploaded. However
if the failure occurred on a subscription's first synchronization, it could
have resulted in operations being uploaded to the server twice, which would
usually have caused the synchronization to fail with server side errors.
This has been fixed so that the syssync table is updated prior to the end
of the upload. As a result an extra exchange may occur when the end of the
upload was not sent, but the client should never fail to do an extra exchange
when it is required.
================(Build #3618 - Engineering Case #487516)================
If a database had been initialized with the -b option (blank padding of strings
for comparisons), then the log scanning would not have read the delete_old_logs
database option properly. The log scanning code would always have used the
default value of 'Off', regardless of the value set in the database. This
problem has now been fixed.
================(Build #3610 - Engineering Case #486446)================
When running on a slow network, the MobiLink client dbmlsync could have reported
'Internal Error (1003)'. This problem has been corrected.
================(Build #3608 - Engineering Case #485878)================
If a remote database synchronized an NCHAR column that was included in multiple
publications, or if a database initialized with a multi-byte character set
synchronized a CHAR column that was included in multiple publications, then
dbmlsync would have incorrectly reported that a column subset mismatch existed
on the column in question. This has been fixed.
================(Build #3574 - Engineering Case #481905)================
When the MobiLink client dbmlsync was run against a database created using
Turkish settings, it would have failed shortly after startup with the message:
SQL statement failed: (-141) Table 'sysarticlecol' not found.
This problem has been fixed.
================(Build #3553 - Engineering Case #475760)================
If multiple publications were being synchronized separately, and the MobiLink
client (dbmlsync) was running in hover mode, but the SendTriggers extended
option for the publications were not all the same, it was possible for dbmlsync
to not have synchronized trigger actions when they should have been, or to
have synchronized trigger actions when they should not have been. This problem
has now been fixed, but introduces a behaviour change. When multiple publications
are synchronized in hover mode, if the SendTriggers option changes from one
synchronization to the next, a complete rescan of the transaction log is
now executed to ensure that the proper SendTriggers option is used. This
could result in synchronizations taking longer than before, with the benefit
that the data in the synchronization is correct.
================(Build #3544 - Engineering Case #477290)================
If logscan polling was disabled (using the -p command line option or the
DisablePolling extended option) and a progress offset mismatch occurred,
then the MobiLink client would have sent a new upload using the remote progress
value. If the remote progress was greater than the consolidated progress,
then this could have resulted in data loss, as operations on the remote could
have been lost. If the remote progress was less than the consolidated progress,
this could have resulted in server side errors applying the upload, as the
same operation might have been uploaded more than once. This problem is now
fixed. When a progress mismatch occurs a new upload will be sent from the
consolidated progress unless the -r, -ra or -rb options are used.
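The new mismatch handling can be sketched as follows. This is an illustrative Python model; treating -r, -ra and -rb uniformly as "start from the remote progress" is a simplification for the sake of the example:

```python
def resend_offset(remote_progress, consolidated_progress, retry_option=None):
    """On a progress-offset mismatch, pick where the rebuilt upload starts.
    After the fix, the consolidated database's progress wins unless one of
    the -r, -ra or -rb options overrides it (simplified here to mean
    'start from the remote progress'). Using the consolidated progress
    avoids both data loss (remote ahead) and double-uploaded operations
    (remote behind)."""
    if retry_option in ("-r", "-ra", "-rb"):
        return remote_progress
    return consolidated_progress
```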
================(Build #3532 - Engineering Case #473724)================
When both the graphical and non-graphical Dbmlsync Integration Components
(ActiveX components) were unregistered, they left the following key in the
registry:
HKEY_CLASSES_ROOT\TypeLib\{A385EA65-7B23-4DC3-A444-2E759FF30B14}
This key is now removed when both components have been unregistered.
================(Build #3509 - Engineering Case #471958)================
If the primary key of a table being synchronized was not defined as the
initial column or columns of the table, it was possible for the MobiLink
Server to crash while processing the download_delete_cursor for that table.
It was more likely for the MobiLink Server to crash if the options -b or
-vr were specified on the MobiLink server command line. The problem has
now been fixed.
================(Build #3503 - Engineering Case #471007)================
When the MobiLink client (dbmlsync) was run with table locking enabled,
which is the default unless the extended option LockTables is set to 'off',
the expected behaviour is for synchronizations to fail unless dbmlsync
can obtain locks on the tables being synchronized. A problem that would have
allowed synchronizations to continue when dbmlsync failed to obtain the locks,
has now been fixed.
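For reference, the option mentioned above is set per subscription. A minimal sketch, with placeholder publication and user names, assuming the ALTER SYNCHRONIZATION SUBSCRIPTION ... ADD OPTION syntax (verify against your SQL Anywhere version):

```sql
-- Sketch only: turn off dbmlsync table locking for one subscription.
-- 'pub1' and 'ml_user1' are placeholder names.
ALTER SYNCHRONIZATION SUBSCRIPTION TO pub1 FOR ml_user1
    ADD OPTION LockTables='OFF';
```

With this option off, dbmlsync does not hold table locks during synchronization, so the failure behaviour described above does not apply.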
================(Build #3486 - Engineering Case #466996)================
It was possible for a dbmlsync synchronization to fail with the following
error messages:
... Communication error occurred while sending data to the MobiLink server
... Internal error (???!s).
... Communication error occurred while sending data to the MobiLink server
... Unspecified communication error
This problem was most likely to have occurred when a slow network was being
used. With a slow network, dbmlsync could have become blocked on a network
write, which prevented the sending of a liveness message. To correct this
problem, dbmlsync will no longer attempt to send liveness messages when it
is blocked on a write.
A possible workaround for this problem would be to use the timeout communication
parameter to increase the liveness timeout.
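The workaround above sets the timeout parameter in the client's communication address. A hedged sketch on an existing subscription, with placeholder names (confirm the statement syntax and parameter name for your client version):

```sql
-- Sketch only: raise the liveness timeout to 300 seconds for tcpip syncs.
-- 'pub1', 'ml_user1', and 'mlserver' are placeholders.
ALTER SYNCHRONIZATION SUBSCRIPTION TO pub1 FOR ml_user1
    TYPE tcpip ADDRESS 'host=mlserver;timeout=300';
```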
================(Build #3474 - Engineering Case #463668)================
A memory leak would have occurred in the MobiLink client when synchronizing
BIT strings. This has been fixed.
================(Build #4170 - Engineering Case #650719)================
After a failed download, an attempt to restart the download may have failed
and reported a "Protocol Error" or a read failure. This has been
fixed.
================(Build #3984 - Engineering Case #488676)================
The HTTP option 'buffer_size' was limited to 64000 (64KB). On slow networks
and/or large uploads or downloads, the overhead due to HTTP could have been
significant. The 'buffer_size' option is now limited to 1000000000 (1GB).
When using slow networks to perform HTTP or HTTPS synchronizations, tests
could be done with larger values for 'buffer_size' to see if synchronization
times improve.
For versions 11.0.1 and up, this change only applies to the -xo option of
the MobiLink server. The -x option already allows larger values.
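As an illustration, buffer_size is one of the client's network protocol options. A sketch with placeholder names (verify the statement syntax and the accepted range for your version):

```sql
-- Sketch only: try a larger HTTP buffer (256 KB) on a slow network.
-- 'pub1', 'ml_user1', and 'mlserver' are placeholders.
ALTER SYNCHRONIZATION SUBSCRIPTION TO pub1 FOR ml_user1
    TYPE http ADDRESS 'host=mlserver;port=80;buffer_size=262144';
```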
================(Build #3891 - Engineering Case #571465)================
If a network error occurred in the MobiLink Monitor, the SQL Anywhere Monitor,
QAnywhere, or the Notifier, there could have been garbage characters trailing
the error string.
For example:
"The server monitor was unable to contact the MobiLink server. The
host 'mlstress02' is still available. Error: Timed out trying to read 128
bytes.rWms"
This has been fixed.
================(Build #3854 - Engineering Case #562083)================
The MobiLink server could have silently ignored bad HTTP requests. In particular,
subsequent requests received by MobiLink server B, for a session started
in MobiLink server A, would have been silently ignored. The error was particularly
likely to appear if an HTTP intermediary was misbehaving and sending different
HTTP requests for the same session to different MobiLink servers. This has
been fixed, and this case will now issue an error.
================(Build #3817 - Engineering Case #553300)================
Network error messages in the MobiLink monitor, the SA Monitor, the Notifier
or the QAnywhere server could have been garbled on non-English machines.
This has been fixed.
================(Build #3788 - Engineering Case #548144)================
If a network error occurred during a read from the stream, some MobiLink
Java clients could have hung with 100% CPU utilization. This has been fixed.
The MobiLink Monitor, the SQL Anywhere Monitor, the Notifier and QAnywhere
are all affected by this.
================(Build #3788 - Engineering Case #548033)================
When synchronizing through a third-party server or proxy and using TLS or
HTTPS, the sync could have failed with the stream error code STREAM_ERROR_READ
and system error code 4099 (hex 1003). This has now been fixed.
================(Build #3662 - Engineering Case #495146)================
Synchronizations may have failed or hung, particularly on slow or low-quality
networks. This has been fixed.
================(Build #3662 - Engineering Case #495145)================
HTTP or HTTPS synchronizations may have failed or hung, particularly on slow
or low-quality networks. This has been fixed.
================(Build #3652 - Engineering Case #493337)================
MobiLink clients could have failed to parse Set-Cookie HTTP headers sent
by web servers and would have returned the error STREAM_ERROR_HTTP_HEADER_PARSE_ERROR.
This has been fixed.
================(Build #3645 - Engineering Case #489903)================
Some HTTP intermediaries add more information to the HTTP User-Agent header.
This was causing failed synchronizations, and has now been fixed.
Note that any intermediary that removes the information put into the User-Agent
by the MobiLink client will cause synchronizations to fail.
================(Build #3630 - Engineering Case #489258)================
The HTTP synchronization parameter buffer_size was not always respected,
particularly when using zlib compression, which could have caused upload
performance to degrade for large uploads. This has been fixed. Also the default
values for buffer_size have been increased as follows:
Palm - 4K
CE - 16K
all other platforms - 64K
and the maximum value for buffer_size has been increased from 64K to 1G.
================(Build #3576 - Engineering Case #482124)================
The UltraLite and MobiLink security DLLs/shared objects had entry points
that were inconsistent with those used by the SQL Anywhere database server.
This has been corrected. The DLLs/shared objects and the binaries that load
them, must be at the noted build number or later, or else the DLL/shared
object will fail to load and an error (indicating missing/invalid DLL/shared
object) will be issued.
================(Build #3534 - Engineering Case #475962)================
Clients connecting via the -xo option were not able to saturate the number
of database workers specified by the -w option. This has been corrected so
that the number of -xo clients that can concurrently connect is equal to
the number of database worker threads.
================(Build #3490 - Engineering Case #468347)================
MobiLink clients would never have timed out a connection if the timeout synchronization
parameter was set to zero. This has been fixed so that connections will
now time out after the maximum timeout period of 10 minutes if the server
has not responded in that period.
================(Build #3483 - Engineering Case #466812)================
HTTP synchronizations through third party web servers, or proxies that use
cookies, could have failed with stream error STREAM_ERROR_HTTP_HEADER_PARSE_ERROR.
Also, if the server used the "Set-Cookie2" header, the client would
never have sent the cookie back up to the server. These problems have now
been fixed.
================(Build #3479 - Engineering Case #465947)================
MobiLink clients that use TLS or HTTPS would have crashed if they were not
able to load the appropriate TLS stream dlls (mlcrsa10.dll, mlcecc10.dll,
mlcrsafips10.dll and sbgse2.dll). This has been fixed. They will now report
the error "Failed to load library x" (STREAM_ERROR_LOAD_LIBRARY_FAILURE).
================(Build #4184 - Engineering Case #658453)================
When using MobiLink synchronization and timestamp-based downloads with an
Oracle Real Application Cluster (RAC) system, there is a chance that rows
to be downloaded will be missed if the clocks of the Oracle cluster nodes
differ by more than the time elapsed between the MobiLink server fetching
the next last download timestamp and fetching the rows to be downloaded. This problem
is unlikely on a RAC system with synchronized node clocks, but the likelihood
increases with larger node clock differences. A workaround is to create either
a modify_next_last_download_timestamp or modify_last_download_timestamp script
to subtract the maximum node clock difference.
Note that at least since version 10i, Oracle has recommended using Network
Time Protocol (NTP) to synchronize the clocks on all nodes in a cluster,
and NTP typically runs by default on Unix and Linux. With cluster nodes properly
configured to use NTP, their clocks should all be within 200 microseconds
to 10 milliseconds (depending on the proximity of the NTP server). Since
Windows Server 2003, the Windows Time Service implements the NTP version
3 protocol and it runs by default. Also, as of version 11gR2, Oracle Clusterware
includes the Oracle Cluster Time Synchronization Service (CTSS) to either
monitor clock synchronization or, if neither NTP nor the Windows Time Service
is running, actively maintain clock synchronization. However, CTSS
and the Windows Time Service are less accurate than NTP.
To avoid missing rows when Oracle RAC node clocks differ by up to one second
more than the time between fetching the next_last_download_timestamp and
the rows to be downloaded, now the MobiLink server will subtract one second
from the next_last_download_timestamp fetched from the consolidated database,
if
1) the Oracle account used by the MobiLink server has execute permission
for SYS.DBMS_UTILITY,
2) the consolidated database is an Oracle RAC system,
and (only for MobiLink version 12.0.0 and up)
3) there is no generate_next_last_download_timestamp script.
For Oracle RAC node clocks that may differ by greater amounts, you can avoid
the problem by defining a generate_next_last_download_timestamp, modify_next_last_download_timestamp
or modify_last_download_timestamp script to compensate for the maximum node
clock difference.
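One of the scripts mentioned above can be sketched as follows for an Oracle consolidated database. This is a sketch only: it assumes the ml_add_connection_script system procedure, the {ml s.last_download} named parameter, and a maximum node clock skew of 5 seconds; verify the script name and parameter against your MobiLink version.

```sql
-- Sketch only: shift the last download timestamp back by an assumed
-- maximum RAC node clock skew (5 seconds here) so committed rows
-- are not missed from timestamp-based downloads.
CREATE OR REPLACE PROCEDURE shift_last_download( ts IN OUT TIMESTAMP )
AS
BEGIN
    ts := ts - INTERVAL '5' SECOND;
END;
/

-- Register it for a placeholder script version 'v1'.
CALL ml_add_connection_script(
    'v1',
    'modify_last_download_timestamp',
    'CALL shift_last_download( {ml s.last_download} )' );
```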
================(Build #4173 - Engineering Case #655780)================
The method MLResultSet.getBigDecimal(L/java/lang/String;) unnecessarily threw
a 'method not supported' exception. This has been fixed.
================(Build #4164 - Engineering Case #652609)================
When using Oracle as a back-end database, synchronizations may have failed
with the error ORA-08207. This has been fixed.
================(Build #4151 - Engineering Case #647345)================
The data for an upload stream may not have been fully uploaded into the consolidated
database if the consolidated database was running on Microsoft SQL Server
and errors occurred in the upload. For this to have occurred, the connection
property XACT_ABORT must have been set to 'ON' in the consolidated database,
and the handle_error script must have returned 1000 (skip the row and continue
processing). This problem has now been fixed.
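For context, the 1000 mentioned above is the action code a handle_error script assigns to skip the offending row. A minimal sketch, shown in Watcom SQL for brevity (a Microsoft SQL Server consolidated database would use an equivalent T-SQL procedure with an OUTPUT parameter); the named parameters s.action_code, s.error_code, s.error_message, s.user_name, and s.table should be verified against your MobiLink version:

```sql
-- Sketch only: always skip the offending row and continue processing.
CREATE PROCEDURE my_handle_error(
    INOUT action_code INTEGER,
    IN    error_code  INTEGER,
    IN    error_msg   LONG VARCHAR,
    IN    user_name   VARCHAR(128),
    IN    tbl_name    VARCHAR(128) )
BEGIN
    SET action_code = 1000;  -- 1000 = skip the row and continue
END;

CALL ml_add_connection_script(
    'v1',
    'handle_error',
    'CALL my_handle_error( {ml s.action_code}, {ml s.error_code},
       {ml s.error_message}, {ml s.user_name}, {ml s.table} )' );
```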
================(Build #4142 - Engineering Case #646269)================
The MobiLink Server could have crashed under heavy load if the client load
was a mix of old (prior to version 10) and new (version 10 or later) clients.
This has now been fixed.
A workaround is to specify the -cn switch with a value of twice the value
of -w, plus 1. For example, if using the default value of -w (5), specify
-cn 11. Version 12 is not affected, as it no longer supports old clients.
================(Build #4135 - Engineering Case #642568)================
If a synchronization failed with a protocol error, some later synchronization
could have failed with a translator or right truncation error. It was also
possible that, instead of failing, the later sync could have made use of
the failed sync's data, for example inserting it into the consolidated
database. These issues have been fixed.
================(Build #4120 - Engineering Case #639825)================
The 32-bit authentication value sent to MobiLink clients was being truncated
to 16-bits. This has been fixed. In order to use this fix, both clients
and server must be updated. If the use of this fix is not required, it is
not necessary to upgrade both the clients and server.
================(Build #4116 - Engineering Case #637309)================
The MobiLink server could have crashed at the end of a version 9 or earlier
synchronization request, or while processing the upload stream from a version
10 or later synchronization request.
Also, the MobiLink server was not able to distinguish between empty strings
in varchar(8) or smaller columns, binary(16) or smaller values made of only
0s, the integer 0, and null values when filtering the download. This could
have caused rows to be incorrectly filtered from the download. For example,
if an empty string was uploaded in a row, and the only difference between
a downloaded row and that uploaded row was that the empty string became null,
the row would have been omitted from the download and the remote would not
have received that update.
These issues have been fixed.
================(Build #4109 - Engineering Case #636715)================
The iAS ODBC driver for Oracle would have returned a wrong value for a parameter
indicator through the ODBC API SQLBindParameter( ..., c_type, ..., param_type,
..., &indicator ), if it was called with the following parameters:
1) the C data type of the parameter was SQL_C_WCHAR or SQL_C_CHAR
2) the type of parameter was SQL_PARAM_INPUT_OUTPUT, but the corresponding
parameter used in the SQL statement was input-only
Due to this problem, the data for the user-defined named parameters in the
MobiLink server may have been truncated after each use when the named parameter
was defined as {ml u.varname} and the parameter used in the SQL statement
was input-only. This has now been fixed.
================(Build #4106 - Engineering Case #637169)================
Starting with Visual Studio 2010, class libraries built with the default
project settings will no longer work with a MobiLink server running with
its default CLR version. There are two workarounds for this:
1) Change the target Framework of the VS project.
When creating a new project, there is a drop down above the list of project
types that contains ".NET Framework 4"; change this to ".NET
Framework 2.0", ".NET Framework 3.0", or ".NET Framework
3.5". If a version 4 project has already been created, change the target
framework by right-clicking on the project in the Solution Explorer, and
selecting "Properties" in the context menu. The target framework
can be set on the "Application" tab. When changing the target framework,
there is no longer access to .NET 4.0 features; to use newer features, use
the next workaround.
2) Tell the MobiLink server to load the version 4 framework.
To do this, add -clrVersion=v4.0.30319 to the -sl dnet options. The
"30319" is the specific build number of the framework installed
and may be different on your machine. To find the correct version, look
in the .NET install location, which is typically "c:\WINDOWS\Microsoft.NET\Framework\".
The clrVersion to specify is the v4.0 sub-directory there.
================(Build #4101 - Engineering Case #634921)================
When a ping synchronization took place, the MobiLink server needed to check
the status of the connection to the consolidated database, and would have
done so by executing a query to count the number of rows in the ml_scripts_modified
table, but MobiLink would not have committed or rolled back this query when
the ping synchronization was complete, leaving the transaction open. If the
consolidated database used snapshot isolation, this open transaction would
have resulted in the MobiLink server sending an older last modified timestamp
than was necessary to remote databases until this transaction was closed,
which would not happen until another non-ping synchronization re-used the
same connection in the connection pool. While this did not result in any
data loss, it could result in the same rows being downloaded to the remote
databases multiple times. The MobiLink server no longer leaves this transaction
open after a ping synchronization.
================(Build #4091 - Engineering Case #622866)================
If the MobiLink Server had been started with the "-xo http" or
"-xo https" command line options to accept http[s] synchronizations
from version 9 or lower MobiLink clients, and the port that was listening
for synchronizations received an HTTP request from an HTTP client other than
an UltraLite or SQL Anywhere MobiLink client (for example, a web browser),
the MobiLink Server would have reported an HTTP error to the HTTP client,
posted an error to the ML Server log, but would not have freed the worker
thread in the ML Server. Multiple requests from other HTTP clients would
have eventually resulted in no threads available to handle additional synchronizations.
This has now been fixed, and the worker thread is returned to the pool of
available worker threads after the error is reported.
================(Build #4082 - Engineering Case #632040)================
On 64-bit systems, it was possible for the JDBC driver to crash if some statement
attributes were queried. This has now been fixed.
================(Build #4082 - Engineering Case #631119)================
If the empty string was passed into an SQLNativeSQL or SQLPrepare function,
it was possible for the iAS Oracle ODBC Driver to have crashed. This has
been fixed. The SQLPrepare function will now return the error "Invalid
string or buffer length", and the SQLNativeSQL function will now simply
set the out parameters to the empty string as well.
================(Build #4070 - Engineering Case #629058)================
A Java VM running inside the MobiLink server could have run out of memory
if the server had many requests with different script versions and some sync
scripts made calls to DBConnectionContext.getConnection(). This has been
fixed.
================(Build #4070 - Engineering Case #562039)================
The start times for synchronizations reported by the MobiLink Monitor and
the MobiLink Server, when used with the -vm option, could have been incorrect
if the MobiLink Server had been running for several days. Also, the output
for the -vm option could have been incorrect if a request used non-blocking
download acks, and phase durations reported by -vm option could have been
slightly different than phase times reported by the MobiLink Monitor. These
issues have now been fixed.
================(Build #4016 - Engineering Case #614417)================
The MobiLink server could have crashed when processing a synchronization
request from a client, if the client was older than version 10 and was syncing
UUID columns. This has been fixed.
================(Build #4001 - Engineering Case #611414)================
When using the iAS Oracle ODBC driver, attempting to execute an INSERT, UPDATE,
or DELETE statement with SQLExecDirect immediately after executing a SELECT
statement with the same statement handle, would have failed with the following
error message:
ORA-24333: zero iteration count
This problem is fixed now.
================(Build #4001 - Engineering Case #611373)================
The MobiLink server could have occasionally given the following error:
A downloaded value for table 'table_name' (column #column_number) was either
too big or invalid for the remote schema type
and then aborted the synchronization, when a client was trying to download
data from a table that contained NCHAR, NVARCHAR or LONG NVARCHAR columns,
even when NCHAR, NVARCHAR or LONG NVARCHAR data was uploaded in a previous
synchronization. This problem has now been fixed.
================(Build #3984 - Engineering Case #605651)================
The MobiLink server would have thrown the exception IllegalCastException
when assigning the null reference to the Value property of an IDataParameter
when using the MobiLink Direct Row API to download data. This has been fixed.
A work around is to assign DBNull.Value instead.
================(Build #3968 - Engineering Case #591002)================
The changes for Engineering case 582782 could have caused the MobiLink server
to be much slower for small syncs than servers without the fix. This problem
would have occurred when a consolidated database was running on an Oracle
RAC. The slowness is in the Oracle server: fetching the minimum starting
time of the open transactions from gv$transaction can take as much as a couple
of seconds and it is much slower than from v$transaction. This has now been
corrected.
================(Build #3943 - Engineering Case #585456)================
The queue lengths in the Utilization Graph of the MobiLink Monitor could
have been incorrect and the RAW_TCP_STAGE_LEN, STREAM_STAGE_LEN, HEARTBEAT_STAGE_LEN,
CMD_PROCESSOR_STAGE_LEN metrics printed by the -ppv option could also have
been incorrect. These issues have now been corrected.
================(Build #3943 - Engineering Case #585258)================
The MobiLink server would not have shown any script contents, if the scripts
were written in Java or .NET, even when the verbose option (-vc) was specified
in its command line. This problem is fixed now.
================(Build #3939 - Engineering Case #584310)================
Download performance of the MobiLink server has been improved for tables
that contain no BLOB columns, when the consolidated database is running on
Microsoft SQL Server.
================(Build #3931 - Engineering Case #582782)================
When a consolidated database was running on an Oracle RAC, the MobiLink server
could have skipped rows being downloaded to remote databases in a time-based
download. In the following situations, rows modified in the Oracle database
could be missed from the download stream:
1) the MobiLink server connected to one node on an Oracle RAC;
2) another application, A, connected to another node on the same Oracle RAC;
3) application A modified rows in a synchronization table without committing;
4) a MobiLink client, R1, issued a synchronization request that contained
a time-based download from this table;
5) the MobiLink server completed the synchronization request successfully;
6) application A committed its transaction.
Then the rows modified by application A would not have been downloaded to
remote R1. This problem has now been fixed.
Note, as a result of this change, the Oracle account used by the MobiLink
server must now have permission for the GV_$TRANSACTION Oracle system view
instead of V_$TRANSACTION. Only SYS can grant this access. The Oracle syntax
for granting this access is:
grant select on SYS.GV_$TRANSACTION to <user-name>
================(Build #3930 - Engineering Case #582589)================
If 10 MobiLink Monitors or SQL Anywhere Monitors connected to the same MobiLink
server, and the last to connect then disconnected, then the MobiLink server
could have crashed. This has been fixed.
================(Build #3922 - Engineering Case #580222)================
If the MobiLink Server had been started with "-s 1" command line
option, to indicate that the server should always apply changes to the consolidated
database one row at a time, then the MobiLink Server would still have executed
SAVEPOINT commands. SAVEPOINT commands are not needed when running in single
row mode, so they are no longer executed when the MobiLink Server had been
started with "-s 1".
================(Build #3891 - Engineering Case #567906)================
The MobiLink server could have crashed when multiple -x options were specified
on the command line, with at least one being HTTP and another being HTTPS.
This could have happened, for example, when a VPN connection was created
or dropped in the middle of a non-persistent HTTP/HTTPS synchronization,
and the network intermediaries were set up such that one path resulted in
HTTP and the other resulted in HTTPS. This has been fixed.
================(Build #3888 - Engineering Case #570656)================
If a MobiLink synchronization script included the two characters "ui"
inside an {ml ...} structure for named parameters, and the "ui"
characters were not part of a named parameter, then MobiLink would have incorrectly
replaced the "ui" with a question mark when it sent the command
to the consolidated database. For example, the following script would have
had no problem, since the "ui" in this case was part of the named
parameter "build":
INSERT INTO t1(pk,build) VALUES ( {ml r.pk}, {ml r.build})
However, the following script would have failed, because the "ui"
in the column list for the insert would have been replaced:
{ml INSERT INTO t1(pk,build) VALUES ( r.pk, r.build )}
This has now been fixed.
================(Build #3872 - Engineering Case #565651)================
If an application executed a query like "select ... from t where c =
v", where c was a char or varchar column, v was a variable of type nchar
or nvarchar, and t was a proxy table to a remote table in Microsoft SQL Server,
then the query would have failed with SQL Server reporting the error "The
data types varchar and ntext are incompatible in the equal to operator."
This problem has now been corrected.
================(Build #3864 - Engineering Case #564829)================
When the MobiLink server was listening for HTTP and/or HTTPS requests, and
a load balancer or any other utility (e.g. the RSOE) performed a simple TCP/IP
connect, then an immediate close without sending any bytes, the MobiLink
server would have taken four minutes to time out the socket. If too many
such connections happened in a short time, the MobiLink server could have
run out of sockets earlier than necessary. This has been fixed.
================(Build #3860 - Engineering Case #563592)================
When run on Windows systems, both the MobiLink server and the Relay Server
Outbound Enabler (RSOE) could have held onto sockets for longer than necessary.
This would have caused both to use up sockets faster than necessary, possibly
exhausting system socket limits. With the RSOE, needless timeouts could also
have occurred. This behaviour was particularly evident with non-persistent
HTTP/HTTPS connections, and appeared to be very much OS and machine dependent.
This has been fixed.
================(Build #3858 - Engineering Case #563405)================
The MobiLink server could have crashed if an IDataReader returned by MLUploadData
was not closed. If the server didn't crash it would have leaked memory. The
crash would have occurred at a random time after the synchronization completed.
This has been fixed.
Note that enclosing the use of the IDataReader in a 'using' block will automatically
close it.
================(Build #3857 - Engineering Case #563404)================
Synchronizations could have failed with protocol errors when some, but not
all, of the parameters for a delete command in the .NET Direct Row API were
set to DBNull.Value or null. This has been fixed so that an exception will
be thrown when attempting to execute the command.
================(Build #3840 - Engineering Case #558232)================
The MobiLink Monitor has a default filter which highlights failed synchronizations
in red. Failed synchronizations logged by the MobiLink Server were not being
shown in the Monitor in red as the server was telling the Monitor that every
sync was successful. This has been fixed.
================(Build #3833 - Engineering Case #555985)================
The MobiLink server would have given an error message that the classpath was too
long if the Java classpath given in the -sl java option was longer than about
3000 characters. This restriction has been removed.
================(Build #3832 - Engineering Case #555616)================
The MobiLink server was allocating more memory than necessary and thus wasting
memory. The amount wasted was approximately equal to 13% of the -cm option
value. This has now been fixed.
================(Build #3831 - Engineering Case #555034)================
Push notifications over a SYNC gateway may have stopped working after the
listener reconnected. This was timing dependent and was more likely to have
occurred when there was a proxy, a redirector, or a relay server being used
between the listener and the SYNC gateway of the MobiLink server. The listener
may have reconnected to the SYNC gateway when the IP address of the remote
device had been changed, when the QAAgent registered with the listener on
startup, or a communication error had occurred. This problem has now been
fixed.
================(Build #3821 - Engineering Case #553482)================
Prior to executing the "begin_connection_autocommit" script, the
MobiLink server will temporarily turn on auto-commit on the ODBC connection,
due to restrictions with some consolidated databases. On ASE consolidated
databases, this will generate a warning (SQL_SUCCESS_WITH_INFO) in the ODBC
driver:
"AutoCommit option has changed to true. All pending statements on this
transaction (if any) are committed."
This warning was being generated whether the begin_connection_autocommit
script was defined or was absent. This has been fixed. The server will now
only turn on auto-commit when the script is defined and will only execute
the script if it is defined. If the script is defined, it is still possible
to see this warning logged in the MobiLink server console log. This is expected
behaviour.
================(Build #3819 - Engineering Case #553748)================
When using the iAS ODBC driver for Oracle, calling SQLColAttribute with an
attribute code of SQL_DESC_TYPE_NAME would not have returned the type names
of columns. This has now been fixed.
================(Build #3818 - Engineering Case #553387)================
The following fixes have been made to support MobiLink in DB2 Mainframe Compatibility
Mode:
- The ml_pt_script table did not have an explicit ROWID column. This was
required for the CLOB column. Some D2M deployments support an implicit ROWID,
and some do not, including compatibility mode. Fixed by adding an explicit
ROWID column.
- The SQL used for JCL-based CREATE PROCEDURE statements included the SQL
body of the stored procedure. This worked most of the time, but not under
compatibility mode. Now, the external-procedure form of CREATE PROCEDURE
is used, which doesn't include the body (which of course isn't necessary
because it is in the *.xmit files).
- The SQL used to create the SQL-based D2M stored procedures didn't escape
single quotes. This wasn't noticed because D2M treats most unquoted text
between single quotes as a string, so it just worked. Single quotes are now
escaped in the procedure body, inside the call to DSNTPSMP.
- The D2M ml_add_pt_script stored procedure didn't work under compatibility
mode, for several reasons:
1) it had a VARCHAR( 256 ) parameter (the flags for the SQL Passthrough
script) that caused conversion problems inside the procedure when run under
compatibility mode. This parameter has been changed to VARCHAR( 255 ).
2) it referenced a Unicode string, which isn't supported under compatibility
mode. This has been fixed by replacing it with a non-Unicode string.
3) it used a form of the SIGNAL statement that isn't supported under
compatibility mode. This has been corrected.
================(Build #3814 - Engineering Case #552321)================
If an error or warning occurred when committing, the MobiLink server would
not have reported any error or warning messages. If it was an error, the
MobiLink server would have just failed the synchronization request without
giving any reasons. This problem is fixed.
================(Build #3811 - Engineering Case #539627)================
Some lines printed to the MobiLink server log would not have caused LogListeners
to fire. In particular, warning 10082, "MobiLink server has swapped
data pages to disk out:<...> concurrently used pages:<...>",
never triggered LogListeners. This has been fixed.
================(Build #3808 - Engineering Case #551858)================
If a remote server used character length semantics for a string column (e.g.
a SQLAnywhere remote with an nchar column, or an UltraLiteJ remote with a
char column), and the column on the remote database was smaller than the
column in the consolidated database, then the MobiLink server could have
failed to report the truncation. The server was already counting the number
of characters in the column coming out of the consolidated database, but it
wasn't checking the length against the domain information given by the remote.
This has now been fixed.
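Under character length semantics, the limit that matters is the number of
characters, not the number of encoded bytes. A hypothetical sketch of the
check (the function name is illustrative, not the server's):

```python
def would_truncate(value: str, remote_char_limit: int) -> bool:
    """Compare the character count (not the byte count) against
    the remote column's declared character limit."""
    return len(value) > remote_char_limit

val = "abc"                          # 3 characters
wide = "\u65e5\u672c\u8a9e"          # 3 characters, 9 bytes in UTF-8
# A remote NCHAR(3) holds the 3-character string even though its
# UTF-8 encoding is 9 bytes; a byte-based check would wrongly flag it.
assert len(wide) == 3 and len(wide.encode("utf-8")) == 9
assert not would_truncate(wide, 3)
```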
================(Build #3808 - Engineering Case #551813)================
Messages printed to the MobiLink server log could have been mangled on systems
with non-English character sets. This would have happened most often on errors
from QAnywhere or the Notifier. This has now been fixed.
================(Build #3806 - Engineering Case #547906)================
If a column in the consolidated database was larger than the corresponding
column in the remote database, then the MobiLink server may have crashed
when synchronizing. This has been fixed so that the sync will now abort with
the error -10038.
================(Build #3794 - Engineering Case #548580)================
Synchronizing a UNIQUEIDENTIFIER field in a remote database to Oracle via
MobiLink would have resulted in a 32 character UUID, followed by a NULL character
and three other characters (typically also NULL). When sending GUIDs to Oracle,
MobiLink was removing the hyphens to match the GUIDs generated by the SYS_GUID()
function in Oracle, but was not trimming the ODBC bind length to account
for the hyphen removal, thus resulting in 4 extra bytes in the string representation
of the UUID in Oracle. These four extra characters have now been removed.
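The essence of the fix is that removing the four hyphens shortens a
36-character UUID to 32 characters, so the bind length must be recomputed
from the stripped string. A small illustrative sketch (not the server's
actual code):

```python
def oracle_guid(uuid_str: str) -> str:
    """Strip hyphens to match Oracle's SYS_GUID() format. The ODBC
    bind length must then be the 32-character result's length, not
    the original 36, or four stray bytes trail the value."""
    raw = uuid_str.replace("-", "")
    assert len(raw) == 32   # 36 characters minus 4 hyphens
    return raw

guid = oracle_guid("12345678-9abc-def0-1234-56789abcdef0")
```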
================(Build #3788 - Engineering Case #548032)================
The MobiLink server would have printed the warning, "[10082] MobiLink
server has swapped data pages to disk", after it had swapped 5000 pages
to disk, or about 20MB of row data. This has been changed so that it now
prints this message after the first time the server must swap to disk. This
should make it easier to diagnose performance problems when -cm is set slightly
too small.
================(Build #3787 - Engineering Case #547730)================
If a corrupted UltraLite or SQL Anywhere remote client synchronized with
a MobiLink server, it was possible for protocol errors to be generated. When
this occurred, the MobiLink server console log would have shown the text:
I. <1> failed reading command with id:%d and handle:%d
I. <1> Synchronization complete
This has been fixed. Now, the error message "[-10001] Protocol Error:
400" will be displayed and a synchronization error will be reported.
================(Build #3787 - Engineering Case #547716)================
Attempting to add a blob to the download stream when using the MobiLink Direct
Row API and the MLPreparedStatement.setBytes() method, would have failed.
The method would have returned the error "Value is out of range for
conversion" if the length of the byte array was larger than 65536 bytes.
This problem has now been fixed.
================(Build #3786 - Engineering Case #547206)================
Non-persistent HTTPS synchronizations could sometimes fail with stream error
STREAM_ERROR_WRITE and a system error code of 10053. This has been fixed.
================(Build #3779 - Engineering Case #546256)================
When connected to a DB2 Mainframe (D2M) consolidated database, the MobiLink
server could have held locks across COMMITs, causing increased contention
and sometimes resulting in deadlock or timeout errors. This has been fixed.
================(Build #3779 - Engineering Case #546173)================
When uploading timestamp data with the .NET Direct Row API, an exception
could have been thrown. Even if an exception wasn't thrown, the fractional
part of the timestamp would have been incorrect. When downloading timestamps
with the .NET Direct Row API, values would have been incorrect by a few seconds.
Both of these problems have now been fixed.
================(Build #3779 - Engineering Case #545762)================
The MobiLink system stored procedures for DB2 Mainframe were created with
a default isolation level of RR (Repeatable Read = Serializable) instead
of CS (Cursor Stability = Read Committed). This has been fixed.
================(Build #3772 - Engineering Case #544763)================
The -nc option, which limits the number of concurrent sockets opened by MobiLink
server, wasn't feasible to use with non-persistent HTTP/HTTPS, because sockets
that could have been continuations of valid synchronizations might have been
rejected. The -sm option has been improved to provide similar functionality
to -nc when used with non-persistent HTTP/HTTPS. Furthermore, the MobiLink
server should usually have provided HTTP error 503 (Service Unavailable)
to the remote when the -sm limit was reached and sessions were kicked out.
If the -nc limit was reached, however, the error would instead have been
a socket error -- usually with a system code about being unable to connect,
but experience has shown the system code can vary.
Note, to limit the number of concurrent synchronizations for non-persistent
HTTP/HTTPS, the -nc option should be set significantly higher than -sm.
The greater the difference between -sm and -nc, the more likely (but never
guaranteed) the 503 error will be sent to the remote instead of a socket
error.
================(Build #3769 - Engineering Case #544943)================
The MobiLink server could have hung, or crashed, when using encrypted streams.
The behaviour was highly dependent on both timing and data size. This has
now been fixed.
================(Build #3769 - Engineering Case #544321)================
An HTTPS synchronization through a proxy server that required authentication
would have failed. When using HTTPS through a proxy server, the client first
sends a CONNECT HTTP request to establish a channel through the proxy. Unfortunately,
authentication challenges were only handled for GET and POST requests. This
has been corrected so that CONNECT requests are now handled as well.
================(Build #3746 - Engineering Case #541075)================
If the MobiLink Server was processing an invalid upload stream, it was possible
for the MobiLink Server to have crashed. The MobiLink Server will now fail
the synchronization.
================(Build #3746 - Engineering Case #540200)================
When running the MobiLink server with minimal verbosity, and using the MobiLink
Listener (dblsn), the message "Disconnected from consolidated database"
would have appeared in the server log. This has been corrected. The connection
used by dblsn will now be reused by the next dblsn client.
================(Build #3739 - Engineering Case #539812)================
The MobiLink server name given by the -zs command line option was not shown
in the title bar of the MobiLink server window. This problem is corrected.
================(Build #3737 - Engineering Case #539309)================
If the MobiLink Server had been started with the -nba+ switch, it was possible
for the MobiLink Server to have crashed if a non-blocking download acknowledgment
was received from a remote database, and the MobiLink Server had lost all
its connections with the consolidated database. The MobiLink server will
now properly report that all connections to the consolidated database have
been lost.
================(Build #3735 - Engineering Case #537962)================
If an error occurred when executing a Java or .NET synchronization script,
and operations had been performed on the connection returned from the DBConnectionContext.getConnection
method, it was possible for those operations to have been committed to the
consolidated database. In order for this to have occurred, the Java or .NET
synchronization script would have to have been executed before any SQL scripts
were executed in the transaction by the MobiLink Server. As a workaround,
a SQL synchronization script that does nothing could be defined to execute
at the start of the transaction. For example, defining a begin_upload connection
event that calls a stored procedure that does nothing would prevent operations
performed in the handle_uploadData event from being accidentally committed.
This problem has now been fixed.
================(Build #3731 - Engineering Case #538347)================
If the MobiLink server was started unsuccessfully (e.g. an invalid parameter,
an inability to connect to the database, or an invalid stream), and no logging
option was specified (-o or -ot), then the server would have displayed an
error dialog and waited for the shutdown button to be pressed. After waiting
about a minute for the manual shutdown, the server could then have crashed.
This has been fixed.
Note, this problem should only have occurred on systems where a GUI was
used.
================(Build #3730 - Engineering Case #537917)================
A download_delete_cursor script that returned NULL and non-NULL values for
the primary key columns of a synchronized table would have made the MobiLink
client behave erratically: the client could have deleted rows that should
not have been deleted, or could have displayed the following error message:
SQL statement failed: (100) Row not found
This problem has been fixed. The MobiLink server will now complain if a
download_delete_cursor returns both NULL and non-NULL values for the primary
key columns of a synchronized table, and will then abort the synchronization.
The download_delete_cursor script must return either NULL values for all the
primary key columns (the MobiLink client will delete all the rows from the
corresponding sync table) or non-NULL values for all of them (the client will
delete the specific rows identified by the primary key values).
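The all-or-nothing rule for primary key values can be sketched as a simple
per-row validation. This is an illustrative Python model of the new check,
not the server's implementation:

```python
def check_delete_rows(rows):
    """Reject a download_delete_cursor result in which a row mixes
    NULL and non-NULL primary key values; all-NULL means delete
    every row, all non-NULL means delete specific rows."""
    for pk in rows:
        nulls = sum(v is None for v in pk)
        if 0 < nulls < len(pk):
            raise ValueError("mixed NULL/non-NULL primary key values")

check_delete_rows([(1, "a"), (None, None)])   # OK: all-or-nothing per row
```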
================(Build #3730 - Engineering Case #534179)================
Java messages could have been corrupted on operating systems with non-English
character sets, as character set conversion was not being done correctly.
This has been fixed.
================(Build #3727 - Engineering Case #538954)================
Server Initiated Synchronization using persistent connections didn't scale
well, as it required persistent resources per connected client on the backend
server, as well as on intermediaries like the Redirector or Relay Server.
This limitation may have caused large deployments to require a server farm,
which is not supported until version 11.x. An alternative solution is now
provided, based on lightweight polling. It consists of a new caching notifier
in the MobiLink server, and a client API for polling notifications (the
MobiLink Lightweight Polling API). The caching notifier refreshes the current
set of notifications by executing a request_cursor against the database at
a settable frequency. Clients poll the cache through the same MobiLink server
port, without involving database access or authentication.
Caching notifier
A caching notifier is a notifier whose request_cursor returns a result
set with 1, 2 or 3 columns. The first column is the key of the notification,
the optional second column is the subject of the notification and the optional
third column is the content of the notification. A caching notifier doesn't
need gateways or tracking information in order to push notifications down
to clients. Clients are expected to initiate connection and poll at the
cache refresh frequency. Users may define multiple caching notifiers for
different business logic, and they can co-exist with other regular or caching
notifiers.
MLLP API
Development resources are found under the following location
%SQLANY10%\MobiLink\ListenerSDK\windows\src\mllplib.h
%SQLANY10%\MobiLink\ListenerSDK\windows\x86\mllplib.dll
%SQLANY10%\MobiLink\ListenerSDK\windows\x86\mllplib.exp
%SQLANY10%\MobiLink\ListenerSDK\windows\x86\mllplib.lib (import library
for the dll)
The MLLP client dynamically loads the various MobiLink client stream libraries.
Example MLLP client app
Please see %SQLANYSH10%\samples\MobiLink\SIS_CarDealer_LP2
================(Build #3727 - Engineering Case #537609)================
If a synchronization contained tables for which no rows were uploaded or
downloaded, the MobiLink server would have allocated more memory than was
necessary. This has been fixed so that the memory usage will be proportional
to the number of columns in empty tables multiplied by the number of upload
transactions. In tests with 50 concurrent syncs of 200 empty tables with
6 columns per table, the peak memory used by MobiLink server dropped by 178MB,
or about 3kB per column. Systems that synchronize many empty tables, or use
transactional uploads (i.e. the -tu option on dbmlsync), will see improved
performance with this fix.
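The per-column figure follows from the test numbers quoted above. A quick
arithmetic check:

```python
# 50 concurrent syncs x 200 empty tables x 6 columns per table
syncs, tables, cols = 50, 200, 6
empty_columns = syncs * tables * cols        # 60,000 column slots
saved_kb = 178 * 1024                        # the 178 MB reduction, in kB
per_column_kb = saved_kb / empty_columns
assert empty_columns == 60_000
assert round(per_column_kb) == 3             # matches the ~3 kB/column figure
```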
================(Build #3727 - Engineering Case #536746)================
When an encryption library could not be found, the MobiLink server would have
issued a misleading message indicating corruption:
Invalid or corrupt network interface library: xxxxx
This has been corrected so that now the MobiLink server issues the message:
Failed to load library xxxxx
The documentation for the load library message indicates that a license may
be required, which is appropriate in this case.
================(Build #3715 - Engineering Case #533805)================
If a consolidated database was running on an Oracle 9i or later server, the
MobiLink server could have sent clients a next_last_download_time (a timestamp
value used to generate a download in the next synchronization) that was earlier
than the last_download_time (a timestamp value used to generate the download
in the current synchronization). This problem could have caused a MobiLink
client to complain when it was trying to apply the downloaded file. This
problem has now been fixed.
================(Build #3715 - Engineering Case #533804)================
When the MobiLink server was under heavy load, the MobiLink Monitor may have
crashed, hung, or disconnected from the MobiLink server. This has now been
fixed.
================(Build #3678 - Engineering Case #498056)================
Scripts can be created that use user-defined parameters denoted by the
{ml u.parm} syntax. Some ODBC drivers, however, have a problem with how
the MobiLink server translates the SQL statement, which would pass the
parameter with IN/OUT attributes. This can now be overcome by using the
new notation {ml ui.parm}: the MobiLink server will pass this parameter
with IN attributes only.
A workaround previous to this new feature would be to code the script as
a stored procedure call.
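The two notations differ only in the prefix before the parameter name. A
hypothetical sketch of how a script processor might distinguish them (the
regular expression and mode names are illustrative assumptions, not the
server's parser):

```python
import re

# Recognize both {ml u.name} (passed IN/OUT) and the new
# {ml ui.name} (passed IN only) parameter notations.
PARAM = re.compile(r"\{ml\s+(u|ui)\.(\w+)\}")

script = "CALL proc({ml u.emp_id}, {ml ui.region})"
modes = {name: ("IN" if kind == "ui" else "INOUT")
         for kind, name in PARAM.findall(script)}
```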
================(Build #3671 - Engineering Case #493219)================
When using non-persistent HTTP, the length of the end_synchronization phase
in the Monitor could have been shown as taking a long time (at least equal
to the connection timeout interval), even though the sync successfully completed
much earlier. The strange display made it hard to interpret what was going
on. This has been fixed.
================(Build #3668 - Engineering Case #495980)================
In order to optimize database access, the MobiLink scripts can be considered
"read-only" when the -f option is specified. In this mode, the
ml_global version of scripts would have been checked for changes before each
synchronization. This has been corrected so that
the check is only done once at startup.
================(Build #3654 - Engineering Case #493708)================
Older MobiLink clients (version 8 and 9) may have failed to synchronize with
an "Out of memory" error. This error should have been reported
as "unknown character set". This has been corrected. The character
translation mechanism can no longer translate characters from DECKanji or
Turkish8 (possibly others). There is no workaround for this issue.
================(Build #3650 - Engineering Case #492788)================
Trying to use the named parameter ODBC_State with the report_odbc_error script
would have resulted in an error, with ODBC_State being reported as an invalid
system parameter. This has been fixed.
================(Build #3650 - Engineering Case #490209)================
An error would have been reported when some valid options were entered in
the dbmlsync option dialog on CE. The options affected included -q -Q -qc
-o -ot -os and -wc. This has been fixed.
================(Build #3647 - Engineering Case #492197)================
A client that sent a malformed communications protocol to the MobiLink server
could have caused the server to crash. This has been fixed.
================(Build #3637 - Engineering Case #490590)================
Synchronizations using TLS, HTTP, or compression could have failed. Also,
mlfiletransfer, dblsn, and all components connecting to the MobiLink server
could have failed. The failure manifestation was highly data-dependent, but
the most likely error was a protocol error. Synchronizations from older (i.e.
versions 8 and 9) clients were not affected by this problem. In the extremely
unlikely event that the lost bytes go unnoticed by the other end of the network
connection, or internally in MobiLink server, then there might be lost data.
For example, in a row operation, a sequence of bytes in the middle of a VARCHAR
column value may have been removed. This has been fixed.
================(Build #3635 - Engineering Case #490481)================
When attempting to shut down a MobiLink server using mlstop, as well as pressing
'q' on UNIX or clicking on the 'Shut down' button on Windows simultaneously,
then the MobiLink server could have crashed. This problem was due to a race
condition, which has now been corrected.
================(Build #3632 - Engineering Case #489597)================
The MobiLink server would have reported an incorrect error if the server
was running in blocking ack mode, but an event for non-blocking ack (-nba+)
mode had been defined. The error reported was: "There is no download
data script defined for table:.. ". This has been corrected.
================(Build #3631 - Engineering Case #489100)================
The MobiLink server must hold all table data needed for currently active
synchronizations. When the total concurrent amount of table data exceeded
the server's cache memory (-cm option) by more than 4200MB the server could
have failed. This has been fixed.
================(Build #3630 - Engineering Case #489266)================
The MobiLink server could have silently failed a ping request from a 9.0.2
or earlier MobiLink client if the client's command line contained any options
of upload_only and/or download_only. This problem has now been fixed.
================(Build #3623 - Engineering Case #488272)================
When the MobiLink server can not store all the data needed for all the synchronizations
in the cache memory (-cm flag), it must swap some to a temporary file. This
data could have been written to the file more often than needed. This has
now been fixed.
================(Build #3623 - Engineering Case #488271)================
When the MobiLink server displayed warnings about the amount of memory that
was swapped to disk, the number reported for "concurrent pages"
was the maximum number of concurrent pages for the current instance of the server.
This created the impression that the page usage always increased. This has
been corrected so that this number is now the number of concurrent pages
in use at the time of the warning.
================(Build #3617 - Engineering Case #487339)================
If an older client (version 8.0 or 9.0) synchronized against the MobiLink
server in a way that a second synchronization was attempted before the first
finished (the client was terminated before the server was finished), the
server would have allowed the second synchronization to proceed. This has
been corrected so that subsequent synchronizations will fail until the first
has completed. This problem does not apply to version 10.0 clients, as their
new protocol detects and handles this situation in a different manner.
================(Build #3611 - Engineering Case #486579)================
A MobiLink client, synchronizing via HTTP, that set the connection timeout
to less than the default 240 seconds, could have been disconnected by the
MobiLink server with a connection timeout error. This has been fixed.
================(Build #3609 - Engineering Case #486224)================
Some HTTP intermediaries can inject a redundant User-Agent HTTP header, resulting
in synchronizations failing. This has been fixed so that as long as the first
User-Agent is the one the MobiLink server expects, it will allow the redundant
header.
================(Build #3609 - Engineering Case #486223)================
Some HTTP intermediaries can convert non-chunked HTTP or HTTPS requests into
chunked requests. The MobiLink server currently cannot accept chunked requests,
and would have crashed when it received them. This has been fixed so it will
now fail the synchronization with the error "unknown transfer encoding"
if it receives chunked requests.
Note that this change only applies to the -x option, and not to the -xo
option.
================(Build #3601 - Engineering Case #485242)================
When using the Dbmlsync Integration Component, an exception could have occurred,
or corrupt data could have been retrieved, if the UploadRow event or the
DownloadRow event was enabled. For this to have occurred, the handler for
the above event must have called the ColumnValue method on the IRowTransfer
object more than once with the same index, and the index used must have corresponded
to a column containing a string or BLOB value. This problem has now been
fixed.
A work around for this problem would be to ensure that the ColumnValue method
is not called more than once for a single index by storing the value retrieved
by the first call in a variable and working with that value.
================(Build #3600 - Engineering Case #485285)================
Some memory could have been leaked by the MobiLink server when using non-persistent
HTTP or HTTPS (persistent=0 at the client). The size of the leak was proportional
to the number of HTTP GET requests, so large downloads would have caused
greater leaks. A small leak could have occurred as well if a communication
error occurred. The impact of the leaked memory could have included failed
synchronizations and/or MobiLink server crashes. This has now been fixed.
================(Build #3600 - Engineering Case #485276)================
When a consolidated database was running on a DB2 or DB2 mainframe server,
the MobiLink server, using the native IBM DB2 ODBC driver, may not have retried
uploads when deadlocks occurred. This problem has now been fixed.
================(Build #3588 - Engineering Case #482520)================
An upload that contained invalid or corrupt table data could have crashed
the MobiLink server. The MobiLink server will now correctly fail the synchronization
when it encounters invalid data.
================(Build #3583 - Engineering Case #483230)================
Ping synchronizations from a MobiLink client would have failed if the MobiLink
Server had been connected to a consolidated database where the authenticate_parameters
event existed. This has now been fixed.
================(Build #3575 - Engineering Case #478491)================
The MobiLink server would have crashed on startup when run on AIX 5.3 TL06
systems. This has now been resolved by having the installer turn off the
execute bit on all shared objects on AIX, so that the libraries will not be
preloaded into the OS library cache.
================(Build #3570 - Engineering Case #481521)================
When synchronizing with HTTP or HTTPS, the MobiLink server could have caused
too many HTTP request/response cycles. The extra exchanges and extra bytes
on the wire would have made synchronizations take longer. This problem was
timing-dependent, and its likelihood was inversely proportional to the round-trip
time between the client and server. This has been fixed.
================(Build #3557 - Engineering Case #472648)================
The MobiLink server could have entered an infinite loop and generated a very
large output file, or even crashed, if the execution of an upload_fetch,
download_cursor, or download_delete_cursor script caused an error, or if
the number of columns in the result set generated by any of these cursors
did not match the number of columns defined in the corresponding remote
table, and the handle_error or handle_odbc_error script returned 1000 when
these errors occurred. This problem has been fixed. The MobiLink server will
now abort the synchronization if any of these unrecoverable errors occur
during synchronization. The errors in the user-defined scripts must be fixed.
================(Build #3554 - Engineering Case #479237)================
The MobiLink server could have exhausted cache memory when multiple version
8.0 or 9.0 MobiLink clients were concurrently synchronizing large numbers
of tables. The server could have crashed, or shut down with the message "Unable
to allocate memory.". This has now been fixed.
Increasing the amount of cache memory using the -cm server option is a work
around for this problem.
================(Build #3554 - Engineering Case #476095)================
The MobiLink server could leak memory when synchronizing and using the Listener.
This includes using the Notifier alone, or with QAnywhere. This is now fixed.
================(Build #3553 - Engineering Case #471931)================
The MobiLink server could have crashed when calling a stored procedure which
generated error messages that were longer than 256 characters. This has
been fixed.
================(Build #3550 - Engineering Case #475008)================
When using a load balancer that tested the MobiLink server availability by
probing the server network port, the server may have shut down. This shutdown
was due to the server believing that it could no longer accept new connections.
This has been corrected.
================(Build #3535 - Engineering Case #472778)================
Secure-streams startup errors, for example when a bogus certificate identity
password was used, would not have prevented the server from starting. The
error would only have been detected on the first synchronization. This may
have resulted in a server crash, depending on the error. This has been fixed.
================(Build #3534 - Engineering Case #474873)================
Synchronizing with a version 8 client, could have caused the MobiLink server
to crash. This would have usually happened after a log offset mismatch. This
has been fixed.
================(Build #3527 - Engineering Case #474631)================
In the Windows Explorer, the files mlrsa_tls10.dll, mlrsa_tls_fips10.dll
and mlecc_tls10.dll would have appeared versionless when their properties
were inspected. This has been corrected.
================(Build #3517 - Engineering Case #545516)================
The MobiLink server now requires the ASE native ODBC driver, version 15.0.0.320,
which can be retrieved from the Sybase Software Developer Kit - 15 ESD #14,
for consolidated databases running on ASE 12.5 or ASE 15.0 database servers.
This is required due to a bug in the older versions of the ASE native ODBC
driver, that has now been fixed.
================(Build #3516 - Engineering Case #470202)================
Named parameters found in scripts would have been parsed and incorrectly
substituted for, when found in comments and quoted strings. This could have
caused parameters to be passed in the wrong order, or an error message to
be generated, when the number of parameters did not match what was expected.
This has been fixed.
Note, the following forms of comments are recognized:
-- (two hyphens)
// (two forward slashes)
/* */
The first two forms cause the script text to be ignored until the end of
a line.
The last form causes all text between the "/*" and "*/" to
be ignored. This form of comment cannot be nested.
Any other type of vendor specific comment will not be recognized and should
not be used to comment out references to a named parameter.
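The fix amounts to skipping comments and quoted strings before scanning for
named parameters. A hypothetical Python sketch of that preprocessing step
(the regular expression and function name are illustrative assumptions):

```python
import re

# Blank out -- and // line comments, /* */ block comments, and
# single-quoted strings, so named parameters inside them are ignored.
_SKIP = re.compile(
    r"--[^\n]*|//[^\n]*|/\*.*?\*/|'(?:[^']|'')*'",
    re.DOTALL,
)

def strip_skipped_regions(sql: str) -> str:
    return _SKIP.sub(" ", sql)

sql = ("SELECT {ml u.a} -- not a param: {ml u.b}\n"
       "FROM t WHERE c = '{ml u.c}' /* {ml u.d} */")
cleaned = strip_skipped_regions(sql)
# Only {ml u.a} survives as a real parameter reference.
```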
================(Build #3515 - Engineering Case #466074)================
Certain x.509 server certificates would have been erroneously rejected by
the client during the TLS handshake, causing the connection to fail. Certificates
generated by 10.0 gencert were particularly likely to be rejected. This problem
has been resolved by upgrading to newer versions of the Certicom TLS libraries.
================(Build #3507 - Engineering Case #471798)================
The MobiLink server could have leaked 2K bytes of memory per synchronization
if compressed streams were being used (i.e. the "compression" client
option was not "none"). The leak depended directly on how the first
uploaded bytes of the synchronization flowed on the network between the client
and server, so the leak was somewhat random from the client's point of view.
The synchronization was not affected by the leak. This has now been fixed.
================(Build #3504 - Engineering Case #472238)================
The "iAnywhere Solutions 10 - Oracle" ODBC driver is now supported
on Windows x64 systems.
================(Build #3504 - Engineering Case #471149)================
When run on Windows 2000, the MobiLink Server was unable to determine the
IP address of the remote client, and thus was unable to ignore a request
when the stream option 'ignore' was specified. This has been fixed.
================(Build #3490 - Engineering Case #468129)================
When an error occurs during synchronization, the MobiLink server should display
the full details of the error, including the MobiLink user name, remote ID,
script version, row values (if available), etc. However, if no -vr command
line option was specified, the row values were not displayed in the error
context by the MobiLink server. This has been corrected.
================(Build #3488 - Engineering Case #467441)================
The built-in MobiLink authentication classes that authenticate to external
LDAP, POP3 and IMAP servers were unable to read a property if the ScriptVersion
of the property was defined as 'ml_global'. It is now possible to define
both the authenticate_user connection script and MobiLink properties needed
for authentication using the special 'ml_global' connection script.
Note that the ml_global property can be overridden with a script version
specific property, similar to the way connection scripts work.
================(Build #3484 - Engineering Case #466696)================
When a MobiLink server was started with the Java VM loaded, an error related
to network issues could have caused the server to crash when shutting down.
This has been fixed.
================(Build #3482 - Engineering Case #466685)================
When synchronization tables on an ASE database server were created with the
'datarows' locking scheme, the MobiLink server could have silently skipped
rows that were inserted by other connections without a commit. The ASE server
doesn't block other connections that query rows from a table created with
the 'datarows' locking scheme, even when there are uncommitted inserts in
that table. The MobiLink server now works around this behaviour properly
to ensure that no rows are skipped.
By default, the MobiLink server now queries the minimum transaction starting
time from master..systransactions and then sends this timestamp to the client
as the last download timestamp. In the next synchronization for this client,
the MobiLink server will use this timestamp as the last download time for
download. In order to get the starting time, the user ID the MobiLink server
uses to connect to an ASE server must have select permission on the master..systransactions
table. If the user does not have the proper permissions, the MobiLink server
will present a warning message and get the download time from the ASE function
getdate(), reverting to the old behaviour where rows could be missed in the
download. With this change, it's now possible that the MobiLink server may
send duplicate data to clients, if there are any open transactions that modified
any tables in the synchronization database or any databases on the ASE database
server when MobiLink server is doing a download. Although the clients are
able to handle duplicate data, this behavior may reduce MobiLink server performance.
The MobiLink server now includes new command line switches that control the
behaviour of the MobiLink server with respect to tables with the 'datarows'
locking scheme. The -dr switch can be used to tell MobiLink that none of
the synchronizing tables use the 'datarows' locking scheme. The -dt switch
on the MobiLink server has also been enhanced to include Adaptive Server
Enterprise in addition to Microsoft SQL Server. The -dt switch can be used
to force MobiLink to detect transactions only within the current database
when determining the last download time that will be sent to the remote database.
The -dt switch should be used if all the synchronizing tables are located
in a single database, as this could reduce duplicate data sent by the MobiLink
server to the clients.
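The timestamp selection described above can be pictured with a short sketch. This is a hypothetical illustration, not the server's actual implementation; the function name and the fixed stand-in for ASE's getdate() are invented for the example.

```python
from datetime import datetime

def pick_last_download_time(open_txn_start_times, can_read_systransactions):
    """Hypothetical sketch of the last-download-timestamp choice: use the
    oldest open-transaction start time (from master..systransactions) when
    the connecting user can read it; otherwise fall back to the current
    server time, the old behaviour in which rows could be missed."""
    now = datetime(2024, 1, 1, 12, 0, 0)  # stand-in for ASE getdate()
    if not can_read_systransactions:
        return now  # a warning would be issued; rows may be missed
    if not open_txn_start_times:
        return now  # no open transactions, so the current time is safe
    # Rows committed at or after the oldest open transaction's start may be
    # downloaded again in the next sync: duplicates are safe, missed rows
    # are not.
    return min(open_txn_start_times)
```

Choosing the minimum trades duplicate downloads (which clients tolerate) for the guarantee that no committed row is skipped.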
================(Build #3475 - Engineering Case #464889)================
If the -sl option (set Java options) was used more than once on the MobiLink
server command line, an error such as "unrecognized argument" could
have occurred. This has now been corrected.
For example "mlsrv10 -sl java ( opt1 ) -sl java (opt2 ) ..." now
correctly parses as "mlsrv10 -sl java( opt1 opt2 ) ..."
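The merging behaviour can be sketched as follows. The tokenized command line and function name are illustrative only; real mlsrv10 option handling is certainly more involved.

```python
def merge_java_options(argv):
    """Hypothetical sketch: collect every "-sl java ( ... )" group from a
    pre-tokenized command line and merge the bracketed options into one
    list, as if a single "-sl java ( ... )" group had been given."""
    opts = []
    i = 0
    while i < len(argv):
        if argv[i] == "-sl" and i + 1 < len(argv) and argv[i + 1] == "java":
            i += 2
            assert argv[i] == "(", "expected opening parenthesis"
            i += 1
            while argv[i] != ")":  # gather options up to the closing paren
                opts.append(argv[i])
                i += 1
        i += 1
    return opts
```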
================(Build #3475 - Engineering Case #464885)================
When the MobiLink server was configured to support synchronization requests
from version 9 clients using the -xo option, the following error could have
occurred at shutdown: "[-10117] Stream Error: Unable to open a 'tcpip'
network connection. An error occurred during shutdown.". This has been
fixed so that the error no longer occurs.
================(Build #3470 - Engineering Case #456446)================
The MobiLink server could have hung when run on Unix systems. This has been
fixed.
================(Build #3420 - Engineering Case #466887)================
Using encrypted streams could have resulted in failed synchronizations, particularly
on Mac systems. This has been fixed.
================(Build #4085 - Engineering Case #631733)================
When connecting the MobiLink Monitor to a MobiLink server, any authentication
error resulted in a poor error message from the Monitor, like:
"Got unexpected data when receiving authentication result. Check
version of MobiLink server (opcode=0)"
This has been fixed to provide more information on the problem. The most
common authentication error is now:
"Invalid userid or password (auth_status=NNNN)"
Other errors, for example due to an expired password, are similar.
================(Build #4084 - Engineering Case #631643)================
Changes made for Engineering case 585456 caused the queue lengths in the
Utilization Graph of the MobiLink Monitor, the RAW_TCP_STAGE_LEN, STREAM_STAGE_LEN,
HEARTBEAT_STAGE_LEN, CMD_PROCESSOR_STAGE_LEN metrics printed by the -ppv
option, and the queue lengths available in the SQL Anywhere Monitor, to possibly
have been reported as larger than they actually were. These issues have
been fixed.
================(Build #4051 - Engineering Case #624041)================
If the action command in the message handler did not contain any arguments,
the MobiLink Listener may have crashed. This has been fixed.
================(Build #3790 - Engineering Case #548463)================
Using the Certificate creation utility (createcert) to create certificates
or certificates requests would have failed with the error "Error occurred
encoding object", when provided non-ASCII input. This has been fixed.
================(Build #3767 - Engineering Case #544613)================
In an SIS environment, if a MobiLink client device went offline (device A),
and then another client device (device B) came online with the same device
address (i.e. IP address/port) as A, and an SIS UDP notification for client
A was sent by the notifier, then client B would have received and rejected
the notification with an error similar to the following:
Error: <Notifier(QAnyNotifier_client)>: Request 1604437321 is accepted
by invalid respondent 'ias_receiver_lsn'. Please check the message filters
on the listener
This error would have happened whenever a UDP notification for client A
was sent, resulting in wasteful SIS notifications. This has now been fixed.
For 9.0.2, the fix was made only for MobiLink with messaging (QAnywhere)
for ASA consolidated databases. In later versions, the fix applies to all
MobiLink Notifiers in all supported consolidated databases.
================(Build #3739 - Engineering Case #539799)================
Notifier errors were categorized as MobiLink server errors. Errors such
as failing to resolve a delivery path to a remote device, or a failed
push attempt, resulted in an error line in the MobiLink server log
that began with an "E.". This also caused a new entry in the system
event log. Notifier errors can be highly repetitive if the business logic
was not implemented in a way that minimized failing attempts. Since these
failures do not affect syncs, they have been recategorized as informational
messages that begin with "I." instead. Two sub-labels, "<SISI>"
and "<SISE>", have also been added to differentiate notifier
informational messages from notifier error messages.
Notifier informational messages now take the following format:
I. YYYY-MM-DD HH:MM:SS <Main> <SISI> ...
Notifier error messages now take the following format:
I. YYYY-MM-DD HH:MM:SS <Main> <SISE> ...
================(Build #3720 - Engineering Case #535235)================
The Listener utility (dblsn), with persistent connections turned off, may have
failed to confirm message delivery or action execution. This may also have
caused the MobiLink server to report protocol errors. This has been fixed.
================(Build #3699 - Engineering Case #498343)================
When the visual version of the Dbmlsync Integration Component ActiveX control
was used on Japanese Windows XP, the font selected for the log window did not
support Japanese characters. As a result any Japanese text printed to the
log window was garbled. An appropriate font is now used.
================(Build #3680 - Engineering Case #498031)================
The MobiLink File Transfer utility (mlfiletransfer) did not send liveness
packets. This meant that for downloads of large files, the MobiLink server
would have timed out the client. This has been fixed.
================(Build #3670 - Engineering Case #496545)================
The Certificate Creation utility createcert would have generated invalid
server certificates when signing them using a CA certificate generated by
gencert (the previous certificate generation utility). Although the server
certificate itself looked fine, clients would not have been able to properly
identify the trusted CA certificate that signed it, and so it would have
been rejected as untrusted, even when the client had the correct CA certificate
in its list of trusted CAs. This has been fixed.
================(Build #3615 - Engineering Case #487169)================
The Listener may have displayed an error dialog shortly after startup when
handling notifications. This problem was timing sensitive; subsequent errors
would have gone into the log file and to the console only. This has been
fixed so that errors in handling notification will no longer cause an error
dialog to be displayed. A workaround is to add the -oq switch to the dblsn
command line.
================(Build #3611 - Engineering Case #486546)================
The Certificate Creation utility createcert allowed users to create certificates
using ECC curves that were not supported by MobiLink or SQL Anywhere servers
or clients. This has been fixed. The list of supported curves has been
reduced to the following seven curves: sect163k1, sect233k1, sect283k1, sect283r1,
secp192r1, secp224r1 and secp256r1.
================(Build #3588 - Engineering Case #483427)================
An attempt to stop a MobiLink service using the Service utility dbsvc would
have failed with a message like "dbmlstop: No such file or directory".
This has been fixed.
================(Build #3586 - Engineering Case #481845)================
The following fixes have been made to the Listener utility:
1) IP tracking was sometimes not firing the BEST_IP_CHANGE event when the Listener
was run on Windows CE.
2) Engineering case 466446 introduced a problem in the Listener where options
following the options -ga or -gi may have been misinterpreted.
3) Asynchronous IP tracking (-ga) was not working on Windows CE devices.
Note, the Listener command line option -ga has been deprecated and asynchronous
IP tracking is now implicit. The default of -gi has been changed from 10
seconds to 60 seconds. The polling mechanism now serves only as a backup.
Users should not need to use -gi except for troubleshooting.
================(Build #3582 - Engineering Case #482832)================
When the visual form of the Dbmlsync Integration Component was used on Japanese
Windows 2000, the font selected for the log window did not support Japanese
characters and so these were not rendered correctly. This problem did not
occur on Windows XP. This problem has been fixed on Japanese Windows 2000
only, as it does not occur in any other environment.
================(Build #3577 - Engineering Case #482373)================
Any MobiLink utility with a GUI could have crashed when it attempted to display
a large message (i.e. greater than 28,000 bytes) when the application was
running in minimized mode. This problem affects only Windows systems, and
was more likely to have occurred if the application was running with the
full verbosity enabled (-dl command line option). This has now been corrected.
================(Build #3554 - Engineering Case #479090)================
When the MobiLink listener (dblsn) used both non-persistent connections (i.e.
using the -pc- switch explicitly) and confirmation, the notifier may have missed
doing some fallback pushes when the listener was in the middle of confirming
a previous notification. The problem has been fixed.
================(Build #3547 - Engineering Case #476819)================
When User Account Control (UAC) is enabled, applications could not be
registered with the provider for Windows Mobile Device Center (WMDC), either
from dbasinst or from the provider itself. The reason for this was that applications
are registered in the registry under HKEY_LOCAL_MACHINE, which requires administrator
privileges when UAC is enabled. Although dbasinst is elevated and
can set these registry entries, the provider was not. This has been corrected
so that applications are now registered under HKEY_CURRENT_USER instead.
However, this means that applications will now only be registered with a
specific user, instead of automatically being registered with all users of
a particular machine. The first time dbasinst is invoked after the patch
has been applied, it will automatically move all registry entries from HKEY_LOCAL_MACHINE
to HKEY_CURRENT_USER.
================(Build #3543 - Engineering Case #477203)================
When using the Dbmlsync Integration Component (ActiveX) and the Message
event was called, the msgID value was set to 0 for some messages that should
have had unique identifiers. In particular this was reported for the message
"Unable to find log offset ....", although other messages were
likely affected as well. The msgID value for these messages would not have
been filled in correctly with a unique value. This has now been corrected.
================(Build #3535 - Engineering Case #476077)================
The 64-bit versions of mlstop.exe and mluser.exe were not included in the 10.0.1
install. This has been corrected.
================(Build #3527 - Engineering Case #474617)================
The certificate utilities createcert.exe and viewcert.exe as well as dbcecc10.dll
were missing version info. This has been fixed.
================(Build #3522 - Engineering Case #473528)================
A long running Listener utility dblsn.exe may have crashed when persistent
connections were used. This has been fixed.
================(Build #3512 - Engineering Case #472777)================
When the Certificate Creation utility (createcert) prompted for an encryption
type, RSA vs ECC, it accepted lowercase 'r' or 'e', but not uppercase 'R'
or 'E'. This has been corrected.
================(Build #3502 - Engineering Case #470594)================
The MobiLink Listener would likely have failed to send confirmation of notification
delivery or confirmation of actions, if persistent connections were explicitly
turned off (i.e. dblsn.exe -pc-). This problem has been fixed.
================(Build #3500 - Engineering Case #470335)================
To determine if a downloaded file has changed, the MobiLink File Transfer
utility calculates a hash of its copy of the file and sends it up to the
server. The server calculates a hash of its own copy and compares it with
the hash of the remote file, and if they don't match, it transmits its entire
copy of the file down to the remote. However, if the only change to the
file is that extra bytes had been appended to the end, the file was transmitted
incorrectly and the resulting file on the remote contained only the new portion
-- the original portion of the file was lost. This has been fixed.
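The comparison scheme described above can be sketched as follows. The actual hash algorithm used by mlfiletransfer is not documented here, so SHA-256 stands in as an illustration; the function name is invented.

```python
import hashlib

def needs_full_retransmit(remote_copy: bytes, server_copy: bytes) -> bool:
    """Sketch of the change-detection step: each side hashes its copy of
    the file and the hashes are compared. Any mismatch, including bytes
    merely appended on the server side, must trigger a transfer of the
    entire file, not just the new tail."""
    remote_hash = hashlib.sha256(remote_copy).hexdigest()
    server_hash = hashlib.sha256(server_copy).hexdigest()
    return remote_hash != server_hash
```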
================(Build #3490 - Engineering Case #468602)================
The MobiLink file transfer utility mlfiletransfer did not display national
characters properly on the command line. This has been fixed.
================(Build #3482 - Engineering Case #466446)================
Two command line options have been added to the MobiLink Listener (dblsn)
for controlling IP tracking behavior.
1. The -gi option controls the IP tracker polling interval. The default
is 10 seconds.
example: dblsn.exe -gi 30
2. The -ga option enables asynchronous IP tracking. The -gi option is ignored
when -ga is used.
example: dblsn.exe -ga
================(Build #3470 - Engineering Case #462782)================
If MobiLink authentication failed, the MobiLink File Transfer utility (mlfiletransfer)
would not have reported a useful error message. This has been corrected so
that it will now report "MobiLink authentication failed". Also,
the methods MLFileTransfer and ULFileTransfer will now return the stream
error code STREAM_ERROR_AUTHENTICATION_FAILED.
================(Build #3470 - Engineering Case #462291)================
The MobiLink File Transfer utility (mlfiletransfer) was accepting an empty
string as a valid script version (e.g. mlfiletransfer -v "" ...).
This has been fixed. The empty string is now rejected, just as if no script
version was supplied at all.
Note, this fix also applies to the MLFileTransfer and ULFileTransfer methods
in the various UltraLite interfaces.
================(Build #4122 - Engineering Case #640205)================
In some cases, the iAS ODBC driver for Oracle could have aborted the operation
and given the following Oracle error:
ORA-03145: I/O streaming direction error
This would have occurred when the driver was used to send NULL BLOBs to
a table in an Oracle database and then the rows were fetched back from this
table using the same database connection, and the Oracle database was running
with a multi-byte character set. This has now been fixed.
================(Build #4106 - Engineering Case #632612)================
The iAnywhere ODBC driver for Oracle could have crashed, if an application
made a request to convert an invalid SQL statement (for instance, a SQL statement
containing a '{' that was not followed by 'call') to native SQL by calling
SQLNativeSQLW. This has been fixed.
================(Build #4087 - Engineering Case #632889)================
When using the iAS Oracle ODBC Driver, a call to SQLGetStmtAttr that queried
the SQL_ATTR_CONCURRENCY, SQL_ATTR_CURSOR_TYPE, SQL_ATTR_CURSOR_SENSITIVITY
or SQL_ATTR_QUERY_TIMEOUT attributes could have returned a random value for
the attribute. The driver now throws an "Optional feature not implemented"
error (SQL State HYC00) for the SQL_ATTR_CONCURRENCY, SQL_ATTR_CURSOR_TYPE,
and SQL_ATTR_CURSOR_SENSITIVITY attributes. When the SQL_ATTR_QUERY_TIMEOUT
is queried, a zero is returned, and no error is reported.
================(Build #4085 - Engineering Case #631405)================
If a result set contained a column with ROWID values, the iAnywhere Oracle
driver would have returned invalid OUT parameters from calls to SQLColAttribute
for the SQL_COLUMN_TYPE and SQL_DESC_DISPLAY_SIZE identifiers. As a workaround,
the select statement could use ROWIDTOCHAR(ROWID) instead of ROWID. This
has been fixed so that the calls to SQLColAttribute will now describe the
column in the result set as a SQL_WCHAR of length 18.
================(Build #4072 - Engineering Case #627776)================
On big-endian machines, the 64-bit iAS ODBC driver could have returned random
values when an application was trying to retrieve the following statement
attributes:
SQL_ATTR_METADATA_ID
SQL_ATTR_ROW_NUMBER
SQL_ATTR_ROW_ARRAY_SIZE
SQL_ATTR_PARAMSET_SIZE
SQL_ATTR_MAX_LENGTH
SQL_ATTR_MAX_ROWS
The driver was using SQLUINTEGER for these statement attributes. It has
been corrected to now use SQLULEN.
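The width mismatch can be demonstrated with a byte-level sketch: on a big-endian machine, reading a 64-bit SQLULEN value through a 32-bit SQLUINTEGER-sized access picks up the high word, so even small values come back wrong. The function names are invented for the illustration.

```python
import struct

def read_attr_wrong_width(value64: int) -> int:
    """Model the bug: a 64-bit attribute value is written into memory,
    but only the first 4 bytes are read back. On a big-endian layout
    those bytes are the HIGH word, so small values read back as 0."""
    buf = struct.pack(">Q", value64)         # 64-bit big-endian buffer
    (wrong,) = struct.unpack(">I", buf[:4])  # 32-bit read of the front
    return wrong

def read_attr_correct_width(value64: int) -> int:
    """Model the fix: read back the full 64-bit (SQLULEN-sized) value."""
    buf = struct.pack(">Q", value64)
    (right,) = struct.unpack(">Q", buf)
    return right
```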
================(Build #4070 - Engineering Case #628952)================
The MobiLink server would have shown the following error:
Invalid Date-time segment. Year value out of range
and aborted any synchronization requests, if the consolidated database was
running on an ASE server with a database that was using a multi-byte charset,
and the MobiLink server was running on a Windows system that was using a
non-English Date format. This problem has now been fixed.
================(Build #4061 - Engineering Case #626951)================
The iAS Oracle ODBC driver could not detect connection status correctly when
a connection was forcibly disconnected by the server. Due to this problem,
an application, such as the MobiLink server, might not re-establish a new
connection and would have reported the same errors repeatedly. This problem
is fixed now.
================(Build #3921 - Engineering Case #573625)================
If an application asked for the connection status through the following ODBC
API
SQLGetConnectAttr( hdbc, SQL_ATTR_CONNECTION_DEAD, ... )
after an error occurred, the iAnywhere Solutions ODBC driver for Oracle
could have told the application that the connection was still alive, even though
the connection had actually been killed, or had timed out. If this problem
occurred for the MobiLink server main connection, in most cases, the server
would have displayed the following messages:
[10009] MobiLink table 'ml_scripts_modified' is damaged
[-10020] Unable to flush scripts
and refused any synchronization requests. The MobiLink server would then
have needed to be restarted in order to carry on any synchronization.
This problem is fixed now.
================(Build #3898 - Engineering Case #574354)================
The iAS ODBC driver for Oracle could have crashed in a stored procedure call,
if the stored procedure contained char or varchar type INOUT parameters,
and the data length of these parameters was greater than 2000 bytes (1000
bytes for driver versions 9.0.2 and 10.0.1). This has now been fixed.
================(Build #3894 - Engineering Case #570915)================
When the iAS ODBC driver for Oracle was used by the MobiLink server to upload
multiple CHAR type columns to a consolidated database running on an Oracle
9.2.0.8 server, it could have failed with the error:
"ORA-01461: can bind a LONG value only for insert into a LONG column"
This problem has now been fixed.
================(Build #3834 - Engineering Case #556326)================
Applications running on Unix systems, and using the iAS ODBC driver for Oracle,
could have received an "Out of memory" error when calling SQLTables,
SQLColumns, SQLPrimaryKeys, SQLForeignKeys, SQLProcedureColumns, SQLProcedures,
or SQLStatistics. This problem has now been fixed.
================(Build #3784 - Engineering Case #547049)================
After calling SQLGetTypeInfo, the application would not have been able to
get the column names of the result set. This problem could have prevented
exporting MobiLink Monitor data to an Oracle database. This has now been fixed.
================(Build #3778 - Engineering Case #546072)================
The iAS ODBC driver for Oracle could have crashed when the application tried
to create multiple connections concurrently. This problem was more likely
to have occurred on Unix systems. This problem has now been fixed.
================(Build #3715 - Engineering Case #533749)================
The iAS ODBC driver for Oracle could have given mangled error and warning
messages to the application when it was running on an operating system that
used a multi-byte character set, such as Japanese or Chinese. This problem
is now fixed.
================(Build #3686 - Engineering Case #499969)================
When a download_cursor or download_delete_cursor event in the MobiLink
server synchronization logic was written as:
{call procedure_name( ?, ? )}
for consolidated databases running on an Oracle server, the iAS ODBC driver
for Oracle may have given the error:
ORA-06553: PLS-306: wrong number or types of arguments in call to 'procedure_name'
if the stored procedure returned a result set and the word "call"
was not all in lowercase. This has now been fixed.
================(Build #3670 - Engineering Case #496568)================
The iAS ODBC driver for Oracle could have shown poor performance when concurrent
access was required by multi-threaded applications, such as the MobiLink
server. This problem has been corrected.
================(Build #3649 - Engineering Case #492667)================
If a Windows application called the function SQLColAttribute() with SQL_DESC_OCTET_LENGTH
when using the iAS ODBC driver for Oracle, it could have received the
transfer octet length in characters, rather than in bytes. Due to this problem,
the application could have incorrectly truncated data. This problem has now
been fixed.
Note, this problem should not happen if the application is the MobiLink
server. The MobiLink server does not call the ODBC function SQLColAttribute().
================(Build #3635 - Engineering Case #490229)================
An application using the iAS ODBC driver for Oracle may have crashed if a
SQL statement caused an error on the Oracle database server or the OCI library,
and if the error message returned from the Oracle server or the OCI library
was greater than 466 bytes in length. This problem is now fixed.
================(Build #3633 - Engineering Case #489741)================
If an application using the iAS Oracle driver issued a "call procedure_name"
statement (without open and close parentheses) through the ODBC functions
SQLPrepare or SQLExecDirect, and the procedure "procedure_name"
returned a result set, the driver could have crashed when the "Procedure
returns results" check-box was checked on Windows, or the "ProcResults"
entry was set to 'yes' on UNIX. This has now been fixed.
================(Build #3605 - Engineering Case #485483)================
If an application used the iAnywhere ODBC driver for Oracle to fetch a result
set from a packaged procedure, the driver would have reported the following
error:
[Sybase][iAnywhere Solutions - Oracle][Oracle]ORA-06553: PLS-306: wrong
number or types of arguments in call to {procedure name}
This problem could have caused the MobiLink server to fail the download,
when a download_cursor or download_delete_cursor event was written as:
{ call package_name.procedure_name ( ?, ?, ...) } or
{ call schema_name.package_name.procedure_name( ?, ?, ... ) }
This problem has been fixed. Now this event can be written as:
{ call [schema_name.][package_name.]procedure_name( ?, ?, ... ) }
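A sketch of accepting all three forms of the call escape listed above. The regular expression and helper are illustrative only, not the driver's actual parser.

```python
import re

# Matches the ODBC call escape { call [schema.][package.]procedure( ... ) }
# with a case-insensitive "call" keyword (see also case #499969 above).
CALL_ESCAPE = re.compile(
    r"^\{\s*call\s+([\w.]+)\s*\(\s*([^)]*)\)\s*\}$",
    re.IGNORECASE,
)

def parse_call_escape(stmt: str):
    """Return the dot-separated name parts of the called procedure, or
    None if the statement is not a call escape of the expected shape."""
    m = CALL_ESCAPE.match(stmt.strip())
    if not m:
        return None
    return m.group(1).split(".")
```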
================(Build #3571 - Engineering Case #480434)================
The iAS ODBC driver for Oracle could have returned numeric data with a comma
"," as the decimal point on some international installations if
the Oracle database used commas as the decimal point. The MobiLink
server is unable to handle numeric data that uses commas as a decimal point
for download, which would have caused it to abort the synchronization. This
problem has now been fixed.
================(Build #3474 - Engineering Case #464640)================
The iAnywhere Oracle ODBC driver could have crashed if the following ODBC
API functions were called in this order:
SQLAllocHandle( ..., SQL_HANDLE_STMT, ...) (returns SQL_SUCCESS)
SQLExecDirect( ..., "select ...", ... ) (returns SQL_SUCCESS)
SQLExecDirect( ..., "insert...", ...) (returns SQL_ERROR)
SQLFreeStmt( ..., SQL_UNBIND) with the same statement handle.
The number of columns of the result set was not reset to zero when the same
statement handle was reused. This problem has now been fixed.
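The nature of the fix can be illustrated with a toy model of a statement handle (the class and method names are invented; the real driver is native code): the cached result-set column count must be cleared when the handle is reused, so a later SQLFreeStmt(..., SQL_UNBIND) does not walk stale column bindings.

```python
class StatementHandle:
    """Toy model of an ODBC statement handle that caches the number of
    result-set columns from the last successful select."""

    def __init__(self):
        self.num_result_cols = 0

    def exec_direct(self, sql, fails=False):
        # The fix: reset the cached count whenever the handle is reused,
        # so a failed (or result-set-free) statement leaves no stale count.
        self.num_result_cols = 0
        if fails:
            return "SQL_ERROR"
        if sql.lstrip().lower().startswith("select"):
            self.num_result_cols = 3  # pretend the select returns 3 columns
        return "SQL_SUCCESS"

    def unbind(self):
        # Models SQLFreeStmt(..., SQL_UNBIND): with a stale count this
        # would have walked column bindings that no longer exist.
        return [None] * self.num_result_cols
```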
================(Build #3589 - Engineering Case #483578)================
When creating a download using the MobiLink Java direct row API, some actions
could have destabilized the MobiLink server. Setting parameters with incompatible
data, or setting columns multiple times with null values that were not nullable,
could have caused the MobiLink server to send an invalid row to the synchronization
client, or crash. This has been fixed.
================(Build #3576 - Engineering Case #481994)================
When using the MobiLink Java direct row API, setting or getting data of types
Date, Time or Timestamp could have worked incorrectly. When using a ResultSet,
the returned value could have been null. When using a PreparedStatement,
the value could have been set as null. This has been fixed.
================(Build #3503 - Engineering Case #471044)================
When accessing column values in the upload using Java direct row handling,
the ResultSet.getObject() method could have returned null instead of an object
representing the column value. Methods such as ResultSet.getString(), ResultSet.getInteger()
etc. would have worked correctly. This is now fixed.
================(Build #3493 - Engineering Case #468558)================
The MobiLink plug-in may have failed to create an index on the timestamp
column used for timestamp-based synchronization if the consolidated database
was an Oracle consolidated database. The script the plug-in uses (in ml-template.zip)
has been fixed and the column is now properly indexed.
================(Build #4218 - Engineering Case #668141)================
Calling the method SATcpOptionsBuilder.ClientPort could have caused the exception
InvalidCastException to have been thrown.
For example:
SATcpOptionsBuilder options = new SATcpOptionsBuilder( "localonly=yes;port=6873"
);
string cport = options.ClientPort;
This problem has been fixed.
================(Build #4214 - Engineering Case #667441)================
Misleading error messages would have been returned to the client when opening
a connection using an invalid DSN. This problem has been fixed.
================(Build #4202 - Engineering Case #663470)================
On CE devices, if multiple applications were running simultaneously, the
library dbdata.dll could have been deployed multiple times to the temp directory.
This problem has been fixed.
Additionally, the version number has been added to the native dll name (i.e.
dbdata12.dll). This will allow running multiple versions of the provider
simultaneously on Windows CE.
================(Build #4193 - Engineering Case #661459)================
The .NET provider was incorrectly assuming that a Sybase IQ 12.7 server supported
the NCHAR datatype. This resulted in a failure to establish a connection
to a Sybase IQ 12.7 server. This problem has been fixed.
================(Build #4175 - Engineering Case #656481)================
An application using the ADO.NET provider, and calling the method SAConnection(),
could not connect successfully to an IQ 12.7 server. A run-time error
(iAnywhere.Data.SQLAnywhere.SAException) would have occurred when the provider
tried to parse the server version string. This problem has been resolved.
================(Build #4171 - Engineering Case #654446)================
If there were multiple applications running simultaneously, the ADO.NET provider
could have failed to load dbdata.dll. This has now been fixed.
================(Build #4131 - Engineering Case #643822)================
Schema locks were not being released when the execution of ExecuteReader()
encountered an exception. If BeginTransaction was called, a Rollback or
Commit should be called by the application to release the locks. Now, if
BeginTransaction is not called, the transaction will be automatically rolled
back when an exception is encountered.
================(Build #4120 - Engineering Case #640786)================
Calls to the GetSchema method would have returned an error when the restrictions
vector size was less than the total number of restrictions. For example,
if 2 restrictions were specified for a schema rowset that took up to 3 restrictions,
the GetSchema call would have resulted in an error indicating that 3 restrictions
were expected. The error was due to the fact that the array size is 2, not
3. This problem has been fixed.
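The fixed behaviour amounts to treating missing trailing restrictions as unspecified rather than rejecting the shorter array. A minimal sketch (the function name is illustrative):

```python
def pad_restrictions(restrictions, total):
    """Sketch: a caller may pass fewer restrictions than the schema
    rowset supports; pad the missing trailing entries with None
    (unspecified) instead of raising an error about the array size."""
    if len(restrictions) > total:
        raise ValueError("too many restrictions")
    return list(restrictions) + [None] * (total - len(restrictions))
```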
================(Build #4112 - Engineering Case #638231)================
The property SAConnection.State would have indicated that the connection
was still open even after the connection had been dropped. This has now been
corrected.
================(Build #4111 - Engineering Case #637909)================
Executing a stored procedure and fetching the result set would have thrown
the exception "Index was outside the bounds of the array" if the
stored procedure selects results from a local temporary table with blob columns.
The provider was determining the row buffer length prior to opening the cursor;
this has been corrected so that it is done after the cursor has been opened.
================(Build #4111 - Engineering Case #637725)================
The Available objects list would have been empty when creating a SQL Server
Integration Services data source view. This problem has been fixed.
================(Build #4096 - Engineering Case #634504)================
SQL Anywhere ODBC data sources were not listed in Visual Studio's Add Connection
dialog box on 64-bit Windows systems. This has now been fixed.
================(Build #4085 - Engineering Case #632608)================
The performance for fetching BLOB columns was much slower compared with the
managed OLE DB provider. This problem has been corrected.
================(Build #4079 - Engineering Case #631249)================
The result sets returned by calls to SAConnection.GetSchema( "Columns"
) and SAConnection.GetSchema( "DataTypes" ) could have been incorrect.
This has been fixed.
================(Build #4078 - Engineering Case #631026)================
When using the ADO.NET provider to insert long binary, varchar or nvarchar
values with a SQL_BINARY, SQL_VARCHAR or SQL_NVARCHAR parameter type, the
parameter type that is passed to the server will be changed to SQL_LONGBINARY,
SQL_LONGVARCHAR or SQL_LONGNVARCHAR if the length of the value to be inserted
is greater than 32767 bytes.
================(Build #4077 - Engineering Case #630913)================
If some columns had been dropped from a table, SAConnection.GetSchema( "Columns"
) could have returned incorrect ORDINAL_POSITION values for that table. This
has been fixed.
================(Build #4077 - Engineering Case #630911)================
Some result sets returned by the method SAConnection.GetSchema() were
not sorted. This has been corrected.
================(Build #4077 - Engineering Case #630909)================
Calls to the method SADataAdapter.Update() in batch update mode would have
hung when updating large tables. This has been fixed.
================(Build #4076 - Engineering Case #630542)================
In the SADataReader's schema table, the SCALE property for Date, DateTime,
DateTimeOffset, SmallDateTime, Time, Timestamp, and Timestamp with time zone
columns has been changed to 6.
================(Build #4076 - Engineering Case #630540)================
The method SAConnection.ServerVersion() has been changed to return normalized
version strings that match the strings returned by SqlConnection.ServerVersion().
================(Build #4076 - Engineering Case #630408)================
The method SAConnection.GetSchema() would have returned incorrect data. Database
objects owned by system accounts were being included in the result sets.
They are now excluded.
================(Build #4074 - Engineering Case #629758)================
The SAConnection.GetSchema method returned incorrect schema data for the
DataTypes schema set and the DataSourceInformation schema set. This problem
was found using the SQL Server Integration Service's Import and Export Wizard.
This has now been corrected.
================(Build #4072 - Engineering Case #629304)================
The values for DataSourceProductVersion and DataSourceProductVersionNormalized
returned by the SAConnection.GetSchema method didn't match the ADO.NET specification.
The normalized version string should have been like nn.nn.nn.nnnn. For example,
SQL Server 2008 would return "DataSourceProductVersion = 10.00.1600,
DataSourceProductVersionNormalized = 10.00.1600". SQL Anywhere was returning
"DataSourceProductVersion = 12.0.0.1234, DataSourceProductVersionNormalized
= 12.0.0". This has now been corrected.
================(Build #4069 - Engineering Case #628587)================
A multithreaded application could have failed to load the unmanaged dll.
This has now been corrected.
================(Build #4066 - Engineering Case #625219)================
A prepared statement was not being dropped when an exception occurred while
calling the method SACommand.ExecuteReader. This problem has been fixed.
================(Build #4065 - Engineering Case #627780)================
The Start Server in Background utility (dbspawn) would have failed to start
a database server if the full path to the server executable was given and
that path contained a space. This has now been fixed.
================(Build #4046 - Engineering Case #622789)================
Applications running on Windows 7 64-bit systems could have crashed when
canceling the methods EndExecuteReader or EndExecuteNonQuery. This problem
has been fixed.
================(Build #4033 - Engineering Case #619719)================
ADO.Net client applications could have hung in very rare circumstances when
fetching data readers. This problem has been fixed.
================(Build #4024 - Engineering Case #617178)================
The asynchronous command execution methods (BeginExecuteReader and
BeginExecuteNonQuery) could have been blocked by exclusive table locks. This
problem has been fixed.
================(Build #4023 - Engineering Case #617699)================
The message "Statement interrupted by user" was not being returned
after a user canceled a command. This problem has been fixed.
================(Build #3985 - Engineering Case #605792)================
If an internal connection was the cause of a diagnostic message, it might
have been identified with the phrase 'another user'. A more descriptive
string identifying the connection will now be used. For example, one might
now get a diagnostic message such as: User 'Cleaner' has the row in 'x' locked
(SQLCODE: -210; SQLSTATE: 42W18)
================(Build #3969 - Engineering Case #591833)================
The ADO.NET provider could have failed to unpack and load dbdata.dll. A race
condition has been fixed.
================(Build #3946 - Engineering Case #586217)================
Methods in the SADataReader class could have returned an incorrect fraction
for datetime and time columns. This has now been corrected.
================(Build #3923 - Engineering Case #580607)================
User Impersonation on IIS caused Win32Exception "Access is denied".
This has been fixed.
================(Build #3915 - Engineering Case #577974)================
Using an ADO.NET Entity Data Model object with an ASP.NET Web Site project
did not work correctly. This has been corrected.
================(Build #3914 - Engineering Case #578172)================
Calls to SADataReader and SADataAdapter methods would not have returned
any data for temporary tables. An AutoCommit performed when opening a data
reader would have dropped the temporary tables. This has now been fixed.
================(Build #3831 - Engineering Case #555185)================
It was not possible to retrieve the columns for a primary key index using
the method SAConnection.GetSchema. The SAConnection.GetSchema method was
using a view which did not include the columns for primary keys. This problem
has been fixed.
================(Build #3830 - Engineering Case #554779)================
It was not possible to retrieve index or key information through calls to
the SAConnection.GetSchema method. The SAConnection.GetSchema method was using
a view which did not include the primary keys. This problem has been fixed.
================(Build #3817 - Engineering Case #552931)================
When executing a batch command which returned multiple result sets, if fetching
the second or subsequent result set caused an error, no exception was returned
to the application. This problem has now been fixed.
================(Build #3782 - Engineering Case #486086)================
Long exception messages generated by the provider could have been truncated.
This problem has been fixed.
================(Build #3724 - Engineering Case #536608)================
Calling sa_set_http_option('AcceptCharset', '+') from within a stored procedure
that is called via an HTTP request should set the response to the database
charset whenever possible, but when a client specified the database charset
it was only selected when its q-value was among the highest. This has been
fixed so that the response uses database charset if specified by the client,
regardless of q-value preference.
Example:
SA server uses ISO-8859-1 charset,
client specifies Accept-Charset:UTF-8,IBM850;q=0.8,ISO-8859-1;q=0.5
Although least preferred by the client, SA will respond with ISO-8859-1
if SA_SET_HTTP_OPTION('AcceptCharset', '+'); has been called (from within
a procedure servicing the HTTP request).
================(Build #3724 - Engineering Case #536563)================
The insert performance sample (instest) shipped with SQL Anywhere did not
correctly assign values to integer columns on 64-bit big-endian platforms.
Depending on the definition of the table being used, this may have caused
instest to terminate prematurely. The instest sample has now been corrected. This
problem can be worked around by modifying the FillItem() function in instest.sqc
to use an "int", instead of a "long", in the cast performed
for the DT_INT/DT_UNSINT case.
================(Build #3716 - Engineering Case #533979)================
The columns from stored procedure result sets were not being excluded when
Visual Studio enumerated stored procedure parameters. This has now been corrected.
================(Build #3714 - Engineering Case #533570)================
If a row was deleted, the delete rolled back, and then the row was updated,
a snapshot scan may have seen the updated value before the update was committed.
This has now been fixed.
================(Build #3713 - Engineering Case #533478)================
A multi-threaded client application, using the ADO.NET provider, could have
crashed, hung or leaked memory if it did not handle thread synchronization
properly. This problem has now been fixed.
================(Build #3710 - Engineering Case #532999)================
The database property 'VersionStorePages' was reporting the total number
of pages in the temporary file rather than the number of pages in the version
store. This has now been fixed.
================(Build #3701 - Engineering Case #531348)================
Table locks were not released by the SABulkCopy() method when
SABulkCopyOptions.LockTable was specified. This problem has been fixed.
================(Build #3701 - Engineering Case #530917)================
An application could have hung when opening a pooled connection. The hang
was a result of two problems:
1. The provider was incorrectly calculating a very long timeout period.
2. Dropped connections were not being recycled.
These problems have now been fixed.
================(Build #3697 - Engineering Case #530041)================
When an SACommand object's Connection property was null, and the methods
ExecuteNonQuery, ExecuteReader or ExecuteScalar were called, the wrong error
message would have been given: "Unable to load DLL 'dbdata.dll' : The
specified module could not be found."
For example:
SACommand cmd = new SACommand();
cmd.CommandText = "UPDATE customers SET name='1' WHERE name='1'";
cmd.ExecuteNonQuery();
This problem has been fixed.
================(Build #3690 - Engineering Case #500905)================
The Options dialog for the SQL Anywhere Explorer did not display properly
when using a large font. This has been fixed by changing the size and location
of some of the controls.
================(Build #3662 - Engineering Case #495375)================
The result set returned when SAConnection.GetSchema("ProcedureParameter")
was called, would have been in an unpredictable order. This has been fixed
by adding an ORDER BY clause to the query statement.
================(Build #3657 - Engineering Case #494184)================
The SACommandBuilder class did not implement the QuoteIdentifier method.
The QuoteIdentifier method has now been added.
================(Build #3639 - Engineering Case #490564)================
A recordset update may have failed when one or more column values were null.
The OLEDB provider failed to correctly identify the primary key columns in
a table, and this resulted in an UPDATE statement containing a WHERE clause
that was overly complex. This problem has now been fixed.
================(Build #3610 - Engineering Case #486469)================
The Data adapter wizard would have shown errors when generating commands.
The errors were caused by exceptions thrown when executing a command which
still had an open data reader. This has been fixed by using new commands.
================(Build #3610 - Engineering Case #486465)================
The installer for the SQL Anywhere Explorer SetupVSPackage.exe was failing
to check if Visual Studio was installed before installing the integration
package, leading to an exception. This has been corrected.
================(Build #3609 - Engineering Case #486531)================
The SQL Anywhere Explorer now supports Visual Studio 2008. Registry settings
for Visual Studio 2008 are now created, and the integration dll has been
modified to support Visual Studio 2008. Note that assemblies built with Visual
Studio 2005 can be used in Visual Studio 2008 as well.
================(Build #3608 - Engineering Case #485816)================
When using the version 10.0 provider to connect to an older database, if
the application did not specify parameter.SADbType = SADbType.VarChar
(or SADbType.Char), it would have defaulted to NVarChar. This would have
resulted in the error 'not enough values for host variables' being returned
to the application. This has been corrected so that the provider now maps
NChar to Char, NVarChar to VarChar, LongNVarChar to LongVarChar and NText
to Text, if the server version is 9.
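For illustration, a sketch of the workaround available before this fix: set the
parameter type explicitly so the provider does not default to a national
character type. The command text and table are hypothetical; an open SACommand
cmd connected to a version 9 server is assumed.
```csharp
// Hypothetical workaround prior to this fix, against a version 9 server.
SAParameter p = cmd.CreateParameter();
p.SADbType = SADbType.VarChar;  // explicitly avoid the NVarChar default
p.Value = "abc";
cmd.Parameters.Add( p );
cmd.ExecuteNonQuery();
```
With the fix applied, the explicit SADbType assignment is no longer required;
the provider performs the NChar-to-Char family mapping automatically.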
================(Build #3606 - Engineering Case #485568)================
Multi-threaded applications could have failed with a number of symptoms.
These symptoms include 'Resource governor for prepared statements limit exceeded'
and 'Communication error' errors, as well as client application crashes.
This problem has been fixed.
================(Build #3606 - Engineering Case #483227)================
As well as the fixes to correct multi-threaded applications in Engineering
case 485568, changes have also been made to thread synchronization to prevent
hangs when running on multi-processor machines.
================(Build #3592 - Engineering Case #484281)================
When sending an attachment over SMTP using xp_sendmail(), there would have
been an extraneous zero length file with the same name sent along with the
real file attachment. This has been fixed.
================(Build #3592 - Engineering Case #483261)================
The methods ClearAllPools and ClearPool could have caused an exception if
any of the connections in a pool were not opened. This problem has been fixed.
================(Build #3589 - Engineering Case #483316)================
The .NET provider could have gone into an endless loop, with very high CPU
usage, on termination of the application. This has been corrected.
================(Build #3560 - Engineering Case #480211)================
When one connection was blocked on a WAITFOR statement, all other connections
were also blocked until it finished. This problem has been fixed.
================(Build #3559 - Engineering Case #479953)================
A call to SADataReader.Read() could have caused an exception after calling
SADataReader.NextResult(), if the result set was empty. The SADataReader.Read()
method was failing to check the result set before fetching data. This problem
has been fixed.
================(Build #3548 - Engineering Case #475472)================
In very rare circumstances, an application could have hung in iAnywhere.Data.SQLAnywhere.
This problem has been fixed.
================(Build #3539 - Engineering Case #474596)================
The 10.0.1 Maintenance Release and subsequent EBFs did not correctly update
the ADO.NET Data Provider. This has been fixed so that the provider is now
updated.
The following steps are a work around:
- From the Control Panel launch the Add or Remove Programs application.
- In the SQL Anywhere 10 item click the Change button.
- In the maintenance Welcome dialog select the Modify option.
- In the Select Features dialog deselect the ADO.NET Data Provider feature
and proceed with installation maintenance (this will uninstall the feature).
- Repeat the above, but this time select the ADO.NET Data Provider and proceed
with installation maintenance (this will reinstall the correct version of
the feature).
================(Build #3537 - Engineering Case #476347)================
If a command resulted in an exception, calling the same command again would
have caused the exception "Attempted to read or write protected memory".
For example:
SACommand cmd = new SACommand( "INSERT INTO t VALUES (1)", conn );
try
{
    cmd.ExecuteNonQuery();
}
catch (Exception ex)
{
    Console.WriteLine( "Insert failed. " + ex.Message + "\n" + ex.StackTrace );
}
try
{
    cmd.ExecuteNonQuery();
}
catch (Exception ex)
{
    Console.WriteLine( "Insert failed. " + ex.Message + "\n" + ex.StackTrace );
}
This problem has been fixed.
================(Build #3534 - Engineering Case #477150)================
Setting SAParameter.SADbType could have caused an IndexOutOfRangeException
in multi-threaded code. This problem has been fixed.
================(Build #3528 - Engineering Case #474312)================
An existing dbdata10.dll in an application's folder would still have been
loaded by the .NET Common Language Runtime. This problem has been fixed.
================(Build #3500 - Engineering Case #470198)================
The utility SetupVSPackage.exe was not respecting the language setting as
defined by DBLANG. This has been fixed.
================(Build #3499 - Engineering Case #468033)================
Inout parameters were not returned by SADataAdapter.Update when using SADataAdapter
and stored procedures to update data. This problem has now been fixed.
================(Build #3489 - Engineering Case #467594)================
Calling SADataReader methods may have caused extra characters to be returned
for string values. A miscalculation of the string length has been fixed.
================(Build #3488 - Engineering Case #468582)================
The SQL Anywhere 10.0.1 Maintenance Release was not updating the ADO.NET
Data Provider file: iAnywhere.Data.SQLAnywhere.dll. This has been fixed
so that subsequent EBFs will update this file.
================(Build #3482 - Engineering Case #466513)================
Executing an SQL statement with input parameters could have caused a memory
leak. This problem has been fixed.
================(Build #3477 - Engineering Case #463466)================
The ADO.NET provider could have thrown the exception 'Resource governor limit
for prepared statements exceeded' if the application issued statements which
contained multiple SELECT statements.
For example:
BEGIN
    DECLARE @MyConnectionId INTEGER;
    DECLARE @MaxCount INTEGER;
    DECLARE @Prepared INTEGER;
    SELECT CONNECTION_PROPERTY( 'prepstmt' ) INTO @Prepared;
    SELECT number INTO @MyConnectionId FROM sa_conn_list(-1);
    SELECT Value INTO @MaxCount FROM sa_conn_properties(@MyConnectionId)
        WHERE PropName = 'max_statement_count';
    SELECT @MyConnectionId AS Id, @MaxCount AS MaxStatement, @Prepared AS Prepared;
END
This problem has been fixed.
================(Build #3473 - Engineering Case #463081)================
The property DataReader.HasRows would have always returned True, whether
or not there were actually rows in the data reader. This problem has been
fixed.
================(Build #3472 - Engineering Case #463174)================
Using pooled connections could have caused exceptions when used in multi-threaded
applications. This has been fixed.
================(Build #3471 - Engineering Case #446220)================
Calling a stored procedure that did not return a result set could have caused
a "cursor not open" error. This problem has been fixed.
================(Build #3470 - Engineering Case #462000)================
The SAParameterCollection.AddWithValue method was not implemented. It has
now been added. The SAParameterCollection.AddWithValue method replaces the
SAParameterCollection.Add method that takes a String and an Object. That
overload of Add was deprecated because of possible ambiguity with the
SAParameterCollection.Add overload that takes a String and a SADbType
enumeration value: passing an integer with the string could be interpreted
as either the parameter value or the corresponding SADbType value. Use
AddWithValue whenever there is a need to add a parameter by specifying its
name and value.
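A short sketch of the ambiguity, assuming an open SAConnection conn and a
hypothetical table t with a single integer column:
```csharp
SACommand cmd = new SACommand( "INSERT INTO t VALUES( ? )", conn );
// Ambiguous (deprecated): 4 could be read as the parameter value, or as
// the integer underlying a SADbType enumeration member.
// cmd.Parameters.Add( "p1", 4 );
// Unambiguous: 4 is clearly the parameter value.
cmd.Parameters.AddWithValue( "p1", 4 );
cmd.ExecuteNonQuery();
```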
================(Build #4134 - Engineering Case #643421)================
If an application was attempting to connect to a server and the server shut
down between the time the protocol connection was made and the time the database
connection was attempted, the application could have crashed. This has been
fixed.
================(Build #4122 - Engineering Case #641485)================
Attempting to make a connection with invalid TCPIP protocol options could
have caused a crash in the client library. This has been fixed.
================(Build #4111 - Engineering Case #635466)================
When making a TCP connection to a remote machine that was unavailable (i.e.
powered off, network cable unplugged, etc.), the time taken to time out could
have been far longer than the value of the TIMEOUT parameter. This has been
fixed.
================(Build #4078 - Engineering Case #631023)================
SA Clients on Mac OS X systems would have received the error "TLS handshake
failure" (SQLCODE -829) when attempting to connect using TLS/RSA to
a server running on a different operating system. This has been fixed.
Note: Engineering case 626480 included new versions of the Certicom and
OpenSSL libraries. This problem only affects Mac OS X clients with these
new libraries connecting to servers on a different operating system also
with these new libraries.
================(Build #4035 - Engineering Case #619937)================
An embedded SQL application that attempted to fetch into a sqlvar of type
DT_VARCHAR, DT_NVARCHAR or DT_BINARY, with a negative sqllen, could have
crashed due to a memory overrun. A negative sqllen with these types is invalid
and should never be passed to DBLib. DBLib has now been fixed to make the
memory overrun less likely.
================(Build #4016 - Engineering Case #615254)================
If a percent character "%" was used in a RAISERROR statement, then
retrieving the error using sqlany_error() would have returned garbage characters
for the percent character. This has been fixed.
================(Build #3996 - Engineering Case #609708)================
When using the SQL Anywhere C API and binding parameters for prepared statements
and calling sqlany_execute() multiple times, the second and subsequent calls
to sqlany_execute() would have failed with the error "Cursor already
open". The problem was introduced as part of the changes for Engineering
case 560351. This has now been fixed.
================(Build #3996 - Engineering Case #609704)================
When calling the SQL Anywhere C API function sqlany_clear_error() the resulting
SQLSTATE value would have been set to the empty string instead of "00000".
This has been fixed.
================(Build #3990 - Engineering Case #607330)================
The SQL Anywhere C API was fetching the TINYINT data type as a signed value.
This has been fixed.
================(Build #3963 - Engineering Case #590383)================
On UNIX systems, there were directories left in the temporary directory,
with names of the form __SQLAnyCli__X_Y, where X is a process number and
Y is an alphanumeric string. This usually happened when a SQL Anywhere client
application was terminated abnormally. An example of this was the PHP driver
running within the Apache web server. This has been fixed.
================(Build #3939 - Engineering Case #584721)================
When using the SQL Anywhere C API, no error information was returned when
a connection attempt failed. This problem was introduced as part of a previous
fix to dbcapi, and has now been corrected.
================(Build #3895 - Engineering Case #573228)================
When concurrent connection requests were made to servers running on multi-core
or multi-processor Unix systems, connections could, in rare cases, have
hung, received communication errors, or otherwise failed. This has been fixed.
================(Build #3877 - Engineering Case #567417)================
The DBLib client library could have crashed if there was a language resource
problem, such as a missing language dll or .res file. In order for this crash
to have occurred, db_init must have been called at least twice, and then
another dblib call must have been made (such as db_string_connect or EXEC
SQL CONNECT). This has been fixed, and db_init will now return 0 on language
resource problems.
================(Build #3845 - Engineering Case #559632)================
An embedded SQL PREPARE or OPEN request could have caused the application
to crash in rare cases, if the connection was dropped before, or during,
the request. This has been fixed.
================(Build #3844 - Engineering Case #558713)================
If a simple SELECT statement was executed with the option Row_counts set
to 'on', the returned row count value on open may have incorrectly been zero.
This has been fixed.
================(Build #3831 - Engineering Case #555450)================
Column names that are greater than 29 bytes in length were being truncated.
This has been fixed.
================(Build #3801 - Engineering Case #549682)================
In rare, timing-dependent circumstances, multi-threaded client applications
with multiple simultaneous TLS connections could have crashed. This has been
fixed.
================(Build #3778 - Engineering Case #546070)================
The operation executed after an embedded SQL application executed a FETCH
could have caused a crash if the cursor was opened without WITH HOLD and
a COMMIT or ROLLBACK was done by a procedure or function called by the FETCH.
This has been fixed.
================(Build #3731 - Engineering Case #538865)================
Applications using TLS on Mac OS X systems may have experienced crashes.
This has been fixed.
================(Build #3709 - Engineering Case #498395)================
In very rare circumstances, a Solaris client application may have crashed
when attempting to connect to a server. This would have occurred if the communications
initialization code failed to allocate some memory. This has been fixed.
The client connect request will now receive a -86 "Client out of memory"
error in these instances.
================(Build #3687 - Engineering Case #500123)================
An embedded SQL application may have hung if all of the following conditions
were true:
- the application had registered a DB_CALLBACK_CONN_DROPPED callback using
db_register_a_callback()
- the application called db_fini to free the resources associated with a
sqlca.
- there was at least one active connection associated with the sqlca (i.e. there
was a connection that had not been disconnected)
This was more likely to occur on a Unix system (including Linux and Mac
OSX), than on Windows systems. This has been fixed.
A workaround is to ensure that all connections are disconnected prior to
calling db_fini().
================(Build #3661 - Engineering Case #495001)================
If an application made concurrent connections with both YES and NO boolean
parameter values, the application could have crashed or the boolean connection
parameters could have been interpreted incorrectly. This has been fixed.
================(Build #3658 - Engineering Case #494414)================
An application could have crashed if an invalid protocol option was used
in the connection string (for example links=tcpip(invalid=value) ). This
has been fixed.
================(Build #3622 - Engineering Case #488274)================
Applications attempting to make a TLS connection may have crashed. This has
been fixed.
================(Build #3615 - Engineering Case #487269)================
The CE 5.0 client libraries, qaagent, dbmlsync and dblsn could crash when
shutting down. This has been fixed.
================(Build #3592 - Engineering Case #484196)================
On Unix systems, transfers of large blobs to the server over TCP/IP may have
been slower than expected. This would have been especially noticeable on
1 Gbps networks. This has been fixed.
================(Build #3592 - Engineering Case #483533)================
If the LOGFILE connection parameter was specified, when connecting to either
a personal or network server without specifying a server name, the line "Connected
to the default personal server" was logged. This was inaccurate, and
possibly confusing. The text of this message has now been changed to "Connected
to the default SharedMemory server."
================(Build #3568 - Engineering Case #481289)================
Data retrieved into DT_NSTRING embedded SQL host variables was not blank
padded for blank-padded databases. This has been fixed.
================(Build #3551 - Engineering Case #477420)================
The behaviour of setting indicator variables when no rows were fetched had
changed in the dblib library from that of previous versions. While the behaviour
in this case was considered to be undefined, it has been reverted to the
original behaviour.
================(Build #3489 - Engineering Case #467873)================
The dblib functions db_change_char_charset() and db_change_nchar_charset()
may not have set the error correctly if they failed, and an error was already
set from the last request. This has been fixed so that the error is now set
correctly.
================(Build #3474 - Engineering Case #464881)================
If an application initialized, finalized, and then re-initialized a client
library, making a connection could possibly have caused the application to
crash. How the client library is initializing and finalizing varies from
API to API. For DBLib, this is done with db_init and db_fini. For ODBC
with a Driver Manager, this is done when connecting when there are no existing
connections from the application, and when disconnecting, when the connection
being disconnected was the only connection from the application. This has
now been fixed so that the application will not crash when making a connection.
================(Build #3474 - Engineering Case #463748)================
On Unix systems, attempting to connect without including a UserID in the
connection string would have failed with the error "User ID '???' does
not exist". This has been fixed to give the error "Invalid user
ID or password".
================(Build #3419 - Engineering Case #467528)================
Applications running on AIX systems, and providing the CPORT option in the
connection string, would have failed to connect if IPv6 was enabled. This
has now been fixed.
================(Build #3419 - Engineering Case #465848)================
On Mac OS X systems, trying to connect to a server running on the same machine
as the client over the IPv6 loopback address would have failed. This has
been fixed.
================(Build #4164 - Engineering Case #652739)================
If an application prepared a batch insert using the SQL Anywhere JDBC driver,
and the last row in the batch involved a call to setNull() and the datatype
passed to setNull() was different than the previous set of setX calls for
that column, then there was a chance the JDBC driver would have inserted
incorrect data. This problem has now been fixed.
For example, the following set of calls would have inserted incorrect data
into the table test:
PreparedStatement pstmt = con.prepareStatement( "insert into test values(?,?)" );
pstmt.setInt( 1, 1001 );
pstmt.setString( 2, "this is row #1" );
pstmt.addBatch();
pstmt.setInt( 1, 2001 );
pstmt.setString( 2, "this is row #2" );
pstmt.addBatch();
pstmt.setInt( 1, 3001 );
pstmt.setString( 2, "this is row #3" );
pstmt.addBatch();
// note the fact that we are switching datatypes below
pstmt.setNull( 1, java.sql.Types.SMALLINT );
pstmt.setString( 2, "this is row #4" );
pstmt.addBatch();
pstmt.executeBatch();
Again, note that this problem would not have occurred if instead of using
java.sql.Types.SMALLINT, the application instead used java.sql.Types.INTEGER.
In addition, if the call to setNull() was not in the last row of the batch,
then again this problem would not have occurred, even if the application
switched datatypes for the setNull() call.
================(Build #4124 - Engineering Case #642015)================
Due to an uninitialized variable in the iAnywhere JDBC driver, applications
using the driver (such as the MobiLink server) could have crashed when trying
to access a result set. This problem has now been fixed.
================(Build #4111 - Engineering Case #638273)================
While connected using the SQL Anywhere or iAnywhere JDBC drivers, attempting
to use setNull() in a batch update may have caused the JDBC driver to throw
a datatype mismatch SQLException if the datatype specified within the setNull()
call differed from other non-null set calls to the same column within the
batch update. This problem has now been fixed and the datatype mismatch will
now only be thrown if a non-null set call of a different type is made on
the same column within a batch update.
================(Build #4029 - Engineering Case #619037)================
If an application opened multiple database metadata result sets, and the
application closed the metadata result sets appropriately, there was still
a chance that the iAnywhere JDBC driver would have closed one of the open
metadata result sets, even though the application had not reached the limit
of 3 metadata result sets open at any given time. This problem has now been
fixed.
================(Build #4025 - Engineering Case #618212)================
If an application connected using the iAnywhere JDBC driver, and then subsequently
called one of the read() overloads of ResultSet.getBlob().getBinaryStream(),
if the blob value was a non-NULL zero length long binary value, then the
read() method would have incorrectly returned 0, instead of -1 to signal
the end of the input stream. This problem has now been fixed.
Note, this problem was introduced by the changes for Engineering case 609739.
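The end-of-stream convention the fix restores is the standard java.io contract. The following standalone sketch (no database connection; a plain ByteArrayInputStream stands in for the blob's InputStream) shows why returning 0 instead of -1 is wrong:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class EndOfStreamDemo {
    public static void main(String[] args) throws IOException {
        // A zero-length stream stands in for a non-NULL, zero-length blob.
        InputStream in = new ByteArrayInputStream(new byte[0]);
        byte[] buf = new byte[16];
        int n = in.read(buf);
        // Per the java.io contract, read() must return -1 at end of stream,
        // never 0 -- a 0 return would make typical read loops spin forever.
        System.out.println(n); // prints -1
    }
}
```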
================(Build #4001 - Engineering Case #609739)================
When an application retrieved a Blob object by calling ResultSet.getBlob(),
and the application subsequently retrieved the Blob's InputStream by calling
Blob.getBinaryStream(), the application's performance would have been severely
impacted if the application called InputStream.read( byte[] ) or InputStream.read(
byte[], int, int ) on the Blob InputStream. This problem has now been fixed.
Note that a workaround is to use Blob.getBytes() directly, instead of using
the Blob InputStream.
================(Build #3999 - Engineering Case #610533)================
If an application retrieved a ResultSet via a DatabaseMetaData call, and
the application subsequently retrieved the underlying Statement object of
that ResultSet by calling ResultSet.getStatement(), then attempting to close
that DatabaseMetaData Statement object could have crashed the application.
The problem with closing DatabaseMetaData Statement objects has now been
fixed.
Note that in general, applications do not explicitly need to close DatabaseMetaData
Statement objects; hence the chances of an application crashing due to this
problem are rare. Closing the ResultSet of a DatabaseMetaData call is not
uncommon and not affected by this.
================(Build #3996 - Engineering Case #609736)================
If an application had a connection that was holding on to table locks, and
the same application had other connections that were blocked in the server
waiting for the table locks to be released, then there was a chance the application
would have hung if the connection holding on to the table locks subsequently
called Connection.createStatement(). This problem has now been fixed.
================(Build #3992 - Engineering Case #608106)================
If an application called ResultSet.getObject() on a tinyint column, and the
column value ranged from 128 to 255, then the JDBC driver would have incorrectly
thrown a SQLException with a conversion error. This problem has now been
fixed.
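The underlying pitfall here is that Java's byte type is signed while the tinyint domain is unsigned. This standalone sketch (illustrative only, not driver code) shows why values 128 to 255 need a wider type:

```java
public class TinyintRangeDemo {
    public static void main(String[] args) {
        // SQL Anywhere's tinyint is unsigned (0..255), but Java's byte is
        // signed (-128..127), so the raw byte for 200 looks negative.
        byte raw = (byte) 200;        // stored bit pattern 0xC8
        System.out.println(raw);      // prints -56
        // Widening with a 0xFF mask recovers the unsigned value, which is
        // why returning the value in a wider type avoids the spurious
        // conversion error.
        int unsigned = raw & 0xFF;
        System.out.println(unsigned); // prints 200
    }
}
```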
================(Build #3975 - Engineering Case #594327)================
If an application called the method DatabaseMetaData.getDatabaseProductVersion()
on a closed Connection object, then the iAnywhere JDBC Driver would have
thrown a NullPointerException, instead of returning the appropriate SQLException.
This problem has now been fixed.
================(Build #3975 - Engineering Case #593347)================
An application connected using the iAnywhere JDBC Driver, and calling the
method PreparedStatement.setBlob() to insert a blob of length between 64M
and 256M, would have seen the insert take much longer than if the application
used the method PreparedStatement.setBinaryStream() instead. This problem
has now been fixed, and the fix also improves the performance of
PreparedStatement.setBinaryStream().
Note that using setBlob() requires significantly less memory than using
setBinaryStream(), and also, for blob values greater than 256M in size, setBlob()
may actually be the only option.
================(Build #3942 - Engineering Case #585013)================
If an application executed Connection.prepareStatement(), or Connection.prepareCall(),
on one connection, and the prepare request took a long time, then attempting
to execute Connection.createStatement(), Connection.prepareStatement(), or
Connection.prepareCall() on a different connection would have ended up blocking
until the original prepare returned. This problem has now been fixed.
================(Build #3924 - Engineering Case #581029)================
The fix for Engineering case 571029 would have caused the Interactive SQL
utility to give an "optional feature not implemented" error when
exporting data to Excel. This problem has now been fixed.
================(Build #3924 - Engineering Case #580599)================
The SQL Anywhere JDBC driver getTimeDateFunctions() call did not return the
correct names for the CURRENT_DATE, CURRENT_TIME, and CURRENT_TIMESTAMP functions.
It returned "current date,current time,current timestamp" instead
of "current_date,current_time,current_timestamp". The same problem
also existed in the iAnywhere JDBC driver. These problems have now been fixed.
================(Build #3924 - Engineering Case #578990)================
If an application called DatabaseMetaData.getSystemFunctions(), the string
returned would have contained the functions named dbname and username. The
correct function names are database and user. This problem has now been fixed.
================(Build #3921 - Engineering Case #580174)================
If an application using the iAnywhere JDBC driver created a scrollable,
updateable Statement or PreparedStatement, created a ResultSet object off
of that Statement or PreparedStatement object, called ResultSet.updateRow()
to perform a positioned update of a row in the ResultSet, and then positioned
the ResultSet to a row before the updated row, attempting to call next()
to move beyond the updated row would have failed. A similar problem existed
if the application positioned the ResultSet beyond the updated row and then
tried to call previous(). Both problems have now been fixed.
================(Build #3892 - Engineering Case #571625)================
If an application called DatabaseMetaData.getCatalogs(), DatabaseMetaData.getSchemas()
or DatabaseMetaData.getTableTypes(), then the JDBC driver would have leaked
a very small amount of memory. This problem has now been fixed.
================(Build #3892 - Engineering Case #571624)================
If an application executed a query that generated a warning, and that warning
was attached to a Statement, PreparedStatement, CallableStatement or ResultSet
object, and the object was subsequently closed without calling clearWarnings()
first, then the JDBC driver would have leaked memory. This problem has now
been fixed.
================(Build #3890 - Engineering Case #571029)================
If an application attempted to execute a batch with a long binary or long
varchar column, and the values within the long columns were large, and the
batch size was also reasonably large, then the iAnywhere JDBC driver may
have given an 'out of memory' dialog, even though the Java VM still had lots
of memory available. This problem has now been fixed.
================(Build #3890 - Engineering Case #570903)================
In very rare circumstances it was possible for the SQL Anywhere JDBC driver
to have caused a crash in the Java VM. The Hotspot log generated for the
crash would most likely have indicated that the crash occurred while attempting
to construct a class cast exception. This has been fixed.
================(Build #3883 - Engineering Case #569316)================
If an application connected using the iAnywhere JDBC driver and created a
very large batch, containing either long binary or long varchar parameters,
then executing the batch may have given a strange out of memory error dialog
after which the application would have crashed. The driver has now been modified
to allow such large batches to be executed; however, any such batches that
require a very large amount of contiguous memory to be allocated will be
executed one row at a time, instead of being batched. In addition, whenever
the driver decides to execute a batch one row at a time, a SQLWarning will
be returned on the executeBatch() call indicating that the "batch was
executed in single row mode due to memory limitations".
================(Build #3860 - Engineering Case #563736)================
If an ODBC DSN explicitly specified an isolation level, and that DSN was
then used within the Interactive SQL utility (dbisql), then the isolation
level specification would have been ignored. This problem has been fixed.
================(Build #3835 - Engineering Case #556757)================
If a JDBC application called ResultSetMetaData.getColumnTypeName() with an
invalid column index, then the application may have crashed. This problem
has been fixed.
================(Build #3827 - Engineering Case #554764)================
When running on heavily loaded Unix systems, applications using a large number
of connections may have crashed. This has been fixed.
================(Build #3817 - Engineering Case #552888)================
If an application called PreparedStatement.setBlob(), and if the underlying
java.sql.Blob implementation misbehaved, then there was a chance that the
iAnywhere JDBC driver would have failed when the PreparedStatement was executed.
Workarounds have now been implemented in the iAnywhere JDBC driver to attempt
to handle the Blob implementation misbehaviour.
================(Build #3814 - Engineering Case #552314)================
If an application called the ResultSet.getBlob() method, and if fetching
the blob should have thrown a valid SQLException, then quite often the exception
would not have been thrown and an empty blob would have instead been returned
to the client. This problem has now been fixed.
================(Build #3811 - Engineering Case #551978)================
If an application called ResultSet.getString() on a uniqueidentifier column,
the iAnywhere JDBC driver would have incorrectly returned a string of binary
bytes. This problem has now been fixed and the iAnywhere JDBC driver now
returns a properly formatted GUID string.
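For illustration, a GUID string can be built from 16 raw bytes roughly as follows. This is a hedged sketch, not the driver's implementation; note that some GUID encodings store the first three fields little-endian, so a real driver may need to reorder those bytes:

```java
public class GuidFormatDemo {
    // Formats 16 bytes as the familiar 8-4-4-4-12 GUID string.
    // (Illustrative helper only; byte ordering of the first three
    // fields is encoding-dependent in real GUID representations.)
    static String toGuidString(byte[] b) {
        StringBuilder sb = new StringBuilder(36);
        for (int i = 0; i < 16; i++) {
            if (i == 4 || i == 6 || i == 8 || i == 10) sb.append('-');
            sb.append(String.format("%02x", b[i] & 0xFF));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        byte[] raw = new byte[16]; // all zeros
        System.out.println(toGuidString(raw));
        // prints 00000000-0000-0000-0000-000000000000
    }
}
```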
================(Build #3794 - Engineering Case #549218)================
If an application created a scrollable statement and then scrolled through
the result set, calling ResultSet.getRow() would have returned the correct
row number for all rows, including when the end of the result set was reached.
However, if the application called ResultSet.isLast() while positioned on
the last row of the result set and then called ResultSet.getRow(), the row
number returned would have been "invalid" or "unknown". This problem has
now been fixed.
Note that calling ResultSet.getRow() after calling ResultSet.isLast() while
positioned on any row other than the last row would have returned the correct
row number.
================(Build #3790 - Engineering Case #548322)================
If an application calls PreparedStatement.executeBatch(), the iAnywhere JDBC
driver is supposed to return an integer array that contains a status for
each row in the batch. The iAnywhere JDBC driver was instead returning an
integer array containing only two elements. This problem has now been fixed.
================(Build #3785 - Engineering Case #544626)================
The iAnywhere JDBC driver may have crashed when allocating a new statement
while the Java VM was out of memory. This has been fixed. The driver will
now either fail gracefully, or assert, depending on the circumstances.
================(Build #3779 - Engineering Case #546290)================
When connected to a DB2 Mainframe (D2M) database, the iAnywhere JDBC driver
could have held locks across COMMITs, causing increased contention and sometimes
resulting in deadlock or timeout errors. This has been fixed.
================(Build #3777 - Engineering Case #545772)================
If an application was connected using the iAnywhere JDBC driver, and the
application subsequently executed a statement that returned more than one
result set, then attempting to fetch any result set after the first would
have failed with a function sequence error. This problem would only have
appeared once the fix for Engineering case 533936 was applied. This has now
been fixed.
================(Build #3766 - Engineering Case #533974)================
When calling a Java stored procedure from the Interactive SQL utility with
a SQL argument of type DATE, TIME or TIMESTAMP and a Java argument of type
java.sql.Date or java.sql.Time, the server would have returned a parse exception
from the JVM. Using dbisqlc to make the same Java stored procedure call would
have worked fine. This problem has now been fixed.
================(Build #3762 - Engineering Case #543541)================
If an application was connected via the iAnywhere JDBC driver, and the application
had made DatabaseMetaData calls, then calling Connection.close() would have
given a NullPointerException if the server had already closed the connection.
This problem has now been fixed.
================(Build #3761 - Engineering Case #543397)================
If an application was connected via jConnect and attempted to query the column
metadata of a table containing a DATE or TIME column, then the server would
have incorrectly returned the error -186 'Subquery cannot return more than
one row'. When support for the TDS DATE and TIME datatypes was added, the
metadata information in spt_jdatatype_info was not properly updated. This
has now been fixed.
================(Build #3756 - Engineering Case #542528)================
An application using the iAnywhere JDBC driver may have in rare cases received
a Null Pointer Exception when calling Connection.close(). This problem has
now been fixed.
================(Build #3750 - Engineering Case #541420)================
In very rare circumstances, likely under heavy concurrency, the iAnywhere
JDBC driver could have crashed. This has been fixed.
================(Build #3743 - Engineering Case #536921)================
Applications using the iAnywhere JDBC driver, could have crashed or hung
if a process ran out of memory. This has been fixed so that it will now either
fail gracefully, or cause an assertion with an appropriate error message,
depending on the circumstances.
================(Build #3737 - Engineering Case #538904)================
If an application using the iAnywhere JDBC driver called the method ResultSet.isLast()
when the result set was positioned on the last row, the call would have correctly
returned a value of 'True', but the result set position would have been moved
to after the last row. This problem has now been fixed.
Note, this problem does not occur if the result set is not positioned on
the last row.
================(Build #3735 - Engineering Case #539289)================
Applications using the iAnywhere JDBC driver may have hung when calling the method
ResultSet.get*(). This has been fixed.
Note, this problem was introduced with the changes for Engineering case
533936.
================(Build #3735 - Engineering Case #539094)================
If an application was using the iAnywhere JDBC driver to generate one or
more result sets by making DatabaseMetaData calls, then there was a chance
the DatabaseMetaData result sets would not have been garbage collected until
connection close time. Note that at most 3 of these result sets would have
remained open until connection close. This problem has now been fixed.
================(Build #3735 - Engineering Case #539077)================
The changes for Engineering case 533936 introduced a problem where calling
the JDBC method Statement.cancel() did not work for queries that did not
return a result set. This has been fixed.
================(Build #3734 - Engineering Case #537147)================
If an application prepared a non-INSERT statement (e.g. a DELETE or an UPDATE
statement) and used PreparedStatement.addBatch() followed by PreparedStatement.executeBatch()
to execute a wide DELETE or wide UPDATE, then the results of executing the
batch were unpredictable. Prepared batches were intended to be used for wide
inserts only, but the iAnywhere JDBC driver did not restrict the usage to
wide inserts. The driver now imposes this restriction and will throw an exception
if an application attempts to use addBatch()/executeBatch() on a non-INSERT
statement.
================(Build #3732 - Engineering Case #538694)================
An application that did not explicitly close Statement, PreparedStatement
or CallableStatement objects may, in rare cases, have crashed when closing
a Connection object. This problem has now been fixed.
================(Build #3728 - Engineering Case #537747)================
If an application used the iAnywhere JDBC driver with either a Microsoft
SQL Server, or DB2, ODBC driver, and the application did not explicitly close
Statement or PreparedStatement objects, then it was possible that the application
would have hung at garbage collection time. This problem has now been fixed.
================(Build #3723 - Engineering Case #533936)================
In rare situations, Java applications using the iAnywhere JDBC driver with
concurrent connections may have hung, or even crashed. Several fixes have
been made to correct race conditions between concurrent connections.
================(Build #3721 - Engineering Case #535849)================
An application using the iAnywhere JDBC driver was not able to change the
connection isolation level to one of the SA Snapshot Isolation levels. This
problem has now been resolved. To use one of the SA Snapshot Isolation levels,
the application can now call the Connection.setTransactionIsolation method
with one of the following values:
for applications using ianywhere.ml.jdbcodbc.IDriver, use:
ianywhere.ml.jdbcodbc.IConnection.SA_TRANSACTION_SNAPSHOT
ianywhere.ml.jdbcodbc.IConnection.SA_TRANSACTION_STATEMENT_SNAPSHOT
ianywhere.ml.jdbcodbc.IConnection.SA_TRANSACTION_STATEMENT_READONLY_SNAPSHOT
for applications using ianywhere.ml.jdbcodbc.jdbc3.IDriver, use:
ianywhere.ml.jdbcodbc.jdbc3.IConnection.SA_TRANSACTION_SNAPSHOT
ianywhere.ml.jdbcodbc.jdbc3.IConnection.SA_TRANSACTION_STATEMENT_SNAPSHOT
ianywhere.ml.jdbcodbc.jdbc3.IConnection.SA_TRANSACTION_STATEMENT_READONLY_SNAPSHOT
================(Build #3714 - Engineering Case #533604)================
If a multi-threaded JDBC application generated a ResultSet object on one
thread, at about the same time that the underlying Statement object was closed
on another thread, then the application may in very rare cases have crashed,
if that ResultSet object was subsequently closed. The same problem could
have occurred if the ResultSet object was generated at about the same time
that the underlying Connection object was closed. This problem has now been
fixed.
================(Build #3710 - Engineering Case #532802)================
Closing a ResultSet object may have, in very rare cases, crashed the iAnywhere
JDBC driver. This problem has now been fixed.
================(Build #3705 - Engineering Case #531718)================
If an application using the iAnywhere JDBC driver attempted to perform a
batched insert (using addBatch()/executeBatch()) and the batch size was large
(greater than 500), then performance of the batch insert would have degraded
significantly when long string columns were involved in the batch. The driver
was allocating more memory than necessary, and making several small allocations
instead of a few large ones. This problem has now been corrected.
================(Build #3704 - Engineering Case #531962)================
If an application attempted to use a Statement, PreparedStatement or ResultSet
object at the same time that the underlying Connection object was closed
on a different thread, then there was a chance the application would have
crashed. This problem has now been fixed.
Note that finalizing Connection objects can cause the same crash.
================(Build #3703 - Engineering Case #530596)================
If a multi-threaded JDBC application attempted to make a connection on one
thread while the Java VM was shutting down, there was a chance that the application
would have crashed. Note that this problem was specific to Unix platforms
only. The problem has now been fixed.
================(Build #3699 - Engineering Case #500125)================
When using the JDBC 3.0 version of the iAnywhere JDBC Driver and calling
the method DatabaseMetaData.getColumns(), a result set with only 18 columns
would have been returned, instead of 22 columns. Note that the extra 4 columns
are in effect meaningless since they provide metadata for Ref types which
are not supported in the iAnywhere JDBC driver. Nevertheless, the problem
has now been fixed and the method now returns a result set with 22 columns.
Using the JDBC 2.0 version of the iAnywhere JDBC Driver will continue to
return a result set with 18 columns as expected.
================(Build #3697 - Engineering Case #530594)================
Closing a ResultSet object may, in very rare cases, crash the iAnywhere JDBC
driver. This problem has now been fixed.
================(Build #3690 - Engineering Case #528330)================
If an application closed a connection that had statements or prepared statements
open, then there was a very small possibility that the application would
have crashed. This has now been fixed.
================(Build #3678 - Engineering Case #498183)================
In very rare circumstances, the iAnywhere JDBC driver may have crashed when
a connection was being closed. This problem has now been fixed.
================(Build #3673 - Engineering Case #496899)================
An application using the iAnywhere JDBC driver would have leaked memory when
executing statements that returned multiple result sets. Executing statements
that return a single result set will not be affected by this problem. The
driver was failing to implicitly close pending result sets for statements
that returned multiple result sets. This has now been fixed.
================(Build #3673 - Engineering Case #496897)================
An application using the iAnywhere JDBC driver would have leaked memory when
making multiple DatabaseMetaData calls. The driver was not releasing references
to Java strings. This problem has been fixed.
================(Build #3670 - Engineering Case #496438)================
An application using the iAnywhere JDBC driver, that had many threads that
were opening and closing connections or statements, may have crashed during
garbage collection. This problem has now been fixed.
================(Build #3662 - Engineering Case #482591)================
If an application using the iAnywhere JDBC driver had multiple threads trying
to create connections, or create statements/prepared statements/callable
statements at the same time, then there was a chance the JDBC driver could
have crashed. The crashes were actually in the Java VM, and workarounds
have now been implemented.
================(Build #3572 - Engineering Case #481450)================
On Unix systems, if a client application left TCP/IP connections open when
dblib, or the ODBC driver, was unloaded (for instance, at application shutdown
time), the application may have crashed. There was a higher chance of seeing
this on multi-processor machines. This has been fixed.
================(Build #3568 - Engineering Case #481235)================
The iAnywhere JDBC driver could have incorrectly given conversion errors
when fetching numeric data. For the problem to have occurred, the numeric
column had to follow a long char or varchar, or a long binary or long varbinary
column in the result set. This problem was not likely to have occurred using
the SQL Anywhere ODBC driver or the DataDirect ODBC drivers, but could have
occurred using the iAnywhere Oracle ODBC driver. The problem has now been
fixed.
================(Build #3564 - Engineering Case #480937)================
If an application using the iAnywhere JDBC Driver fetched an unsigned tinyint
value in the range of 128 to 255, then the JDBC driver would have incorrectly
thrown an overflow exception. This problem has been fixed by having the JDBC
driver promote unsigned tinyint values to smallint.
================(Build #3564 - Engineering Case #480204)================
If an application called the methods ResultSet.getTime(), ResultSet.getDate()
or ResultSet.getTimestamp() using a java.util.Calendar object, then the iAnywhere
JDBC driver would have returned the wrong result. This problem has now been
fixed.
================(Build #3549 - Engineering Case #477892)================
If an application was using the iAnywhere JDBC driver and called ResultSet.getBlob().getBytes()
on a char, varchar or binary column that had an empty value in it, then the
driver would have incorrectly returned null, instead of returning a 0-length
byte [] value. This problem did not occur for empty long varchar or long
binary columns. This problem has now been fixed.
================(Build #3525 - Engineering Case #474305)================
If both the 64-bit and 32-bit SQL Anywhere client software were installed
on the same system, and an application attempted to use the iAnywhere JDBC
driver, it was possible that the driver would have failed with an UnsatisfiedLinkError
exception. This problem could only have occurred if a 32-bit Java VM was
being used and the 64-bit SQL Anywhere libraries appeared first in the path,
or a 64-bit Java VM was being used and the 32-bit SQL Anywhere libraries
appeared first in the path, and the SQL Anywhere installation path had at
least one space in it. In particular, if there was no space in the installation
path, then the bitness of the VM, and the order of the libraries in the path,
did not matter and should not have caused any UnsatisfiedLinkError exceptions.
This particular problem has now been fixed.
================(Build #3506 - Engineering Case #471575)================
If an application used the iAnywhere JDBC Driver to connect to a DB2 server,
and attempted to use PreparedStatement.setBoolean(), the JDBC driver would
have thrown an exception. Calling PreparedStatement.setBoolean() resulted
in the driver binding a parameter of type SQL_BIT, even though DB2 does not
support SQL_BIT. This has been fixed so that a parameter of type SQL_TINYINT
is now used for DB2.
================(Build #3500 - Engineering Case #469827)================
The server may have erroneously reported the error "Right Truncation
of string data" when data was converted to CHAR or VARCHAR as part of
a query (e.g. a CAST operation). This would have occurred if the database was
created with a multi-byte collation, and a column contained NCHAR data of
sufficient length (depending on the collation in use, but always at least
8192 bytes). This has now been fixed.
================(Build #3484 - Engineering Case #466700)================
When using the IBM DB2 ODBC driver with the iAnywhere JDBC driver to try
to fetch rows from a result set, the iAnywhere JDBC driver would have
crashed if the result set had a CLOB column. This has now been fixed.
================(Build #3478 - Engineering Case #464442)================
The iAnywhere JDBC Driver could have leaked memory, and exhausted memory
heaps, if an application caused many SQLWarnings to be generated. This problem
has now been fixed.
================(Build #4181 - Engineering Case #657826)================
When using the Microsoft ODBC Data Source Administrator, a crash may have
resulted if an encrypted password had been specified and the "Test Connection"
button was used. This has been fixed.
================(Build #4139 - Engineering Case #645952)================
The ODBC functions SQLColumns and SQLProcedureColumns incorrectly returned
a NULL value for the CHAR_OCTET_LENGTH column for XML and ST_GEOMETRY data
types. ST_GEOMETRY is a new data type supported by SQL Anywhere in version
12. This has been fixed and the correct value of 2147483647 is now returned.
================(Build #4139 - Engineering Case #645948)================
The LITERAL_PREFIX and LITERAL_SUFFIX characters returned by SQLGetTypeInfo
for binary data types were apostrophes. If these characters were used in
an INSERT statement, the value stored was incorrect.
For example: Store binary 0x1234 into column.
INSERT INTO test (binary_col) VALUES ('1234');
The result is 0x31323334.
If the LITERAL_PREFIX was 0x and the LITERAL_SUFFIX was NULL, then the value
stored was correct.
INSERT INTO test (binary_col) VALUES (0x1234);
This problem has been corrected. The following types will now return 0x
in the LITERAL_PREFIX column and NULL in the LITERAL_SUFFIX column:
long binary
varbinary
binary
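The corruption described above comes from the character codes themselves being stored. This standalone sketch (plain Java, no ODBC involved) shows why the apostrophe-quoted form yields 0x31323334:

```java
import java.nio.charset.StandardCharsets;

public class BinaryLiteralDemo {
    public static void main(String[] args) {
        // Quoting 0x1234 as '1234' stores the four ASCII characters
        // '1', '2', '3', '4', not the intended two-byte value.
        byte[] stored = "1234".getBytes(StandardCharsets.US_ASCII);
        StringBuilder hex = new StringBuilder("0x");
        for (byte b : stored) hex.append(String.format("%02x", b & 0xFF));
        System.out.println(hex); // prints 0x31323334
        // With the 0x prefix the literal denotes the intended two bytes:
        byte[] intended = { 0x12, 0x34 };
        System.out.println(intended.length); // prints 2
    }
}
```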
================(Build #4111 - Engineering Case #638900)================
If a statement that returned no result set was prepared and described, and
a DDL statement then caused the same statement text to return a result set,
client statement caching could have caused the statement to be redescribed
incorrectly. This was a client statement caching performance optimization,
and before this change, there was no way to disable this incorrect behavior.
For example, the following statements executed in dbisql would have returned
an error on the second call foo() statement:
create or replace function foo() returns int begin return 1; end;
call foo();
create or replace procedure foo() begin select 2; end;
call foo();
This has been fixed so that if client statement caching is disabled by setting
the max_client_statement_cached option to 0 for the connection, such a statement
is now described correctly.
================(Build #4111 - Engineering Case #637743)================
Calls to SQLGetTypeInfo() would have returned the wrong UNSIGNED_ATTRIBUTE
column value for TINYINT. The TINYINT datatype is an unsigned type so the
column should have contained a 1 rather than a 0. This problem has been fixed
so that the UNSIGNED_ATTRIBUTE column result now agrees with the result returned
by SQLColAttribute(SQL_DESC_UNSIGNED) for a TINYINT column.
================(Build #4093 - Engineering Case #634189)================
If a connection string was made up of parameters coming from the connection
string and from the data source, and the UID and PWD/ENP parameters were
not all in the connection string or all in the data source, the PWD/ENP parameters
would have been ignored. For example, if DSN "foo" contained a
UID but no PWD, then the connection string "DSN=foo;PWD=secret"
would ignore the PWD field. This has been fixed.
================(Build #4064 - Engineering Case #627634)================
The ODBC driver did not support setting of the SQL_ATTR_METADATA_ID attribute
for connections using SQLSetConnectAttr(). This setting governs how the string
arguments of catalog functions are treated. However, the driver does support
this attribute at the statement level using SQLSetStmtAttr(). For SQLSetConnectAttr(),
the ODBC driver returned a "driver not capable" error. This problem
has been corrected.
If the setting for SQL_ATTR_METADATA_ID is SQL_TRUE, the string argument
of catalog functions are treated as identifiers. The case is not significant.
For nondelimited strings, the driver removes any trailing spaces and the
string is folded to uppercase. For delimited strings, the driver removes
any leading or trailing spaces and takes literally whatever is between the
delimiters.
If the setting is SQL_FALSE, the string arguments of catalog functions are
not treated as identifiers. The case is significant. They can either contain
a string search pattern or not, depending on the argument. The default value
is SQL_FALSE.
The following example changes the default setting for the entire connection.
rc = SQLSetConnectAttr( hdbc, SQL_ATTR_METADATA_ID, (SQLPOINTER)SQL_TRUE,
SQL_IS_UINTEGER );
This setting is important for case-sensitive databases. For example, the
table name "customers" does not match the table name "Customers"
in a function such as SQLPrimaryKeys() unless the SQL_ATTR_METADATA_ID attribute
has been set to SQL_TRUE.
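The folding rules above can be sketched as follows (an illustration of the described behaviour only, not the driver's actual implementation; the function name is hypothetical):

```c
#include <ctype.h>
#include <string.h>

/* Sketch of the SQL_ATTR_METADATA_ID = SQL_TRUE treatment described
   above (illustration only). Nondelimited identifiers have trailing
   spaces removed and are folded to uppercase; delimited identifiers
   keep literally what is between the double quotes (spaces outside
   the delimiters are assumed already trimmed here). */
void fold_metadata_id( char *s )
{
    size_t n = strlen( s );
    if( n >= 2 && s[0] == '"' && s[n - 1] == '"' ) {
        memmove( s, s + 1, n - 2 );         /* strip the delimiters  */
        s[n - 2] = '\0';
    } else {
        while( n > 0 && s[n - 1] == ' ' )   /* trim trailing spaces  */
            s[--n] = '\0';
        for( size_t i = 0; i < n; i++ )     /* fold to uppercase     */
            s[i] = (char)toupper( (unsigned char)s[i] );
    }
}
```

Under these rules the argument "customers" matches a table named CUSTOMERS, while the delimited form "\"customers\"" is taken literally and would not match a table stored as Customers in a case-sensitive database.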
================(Build #4038 - Engineering Case #619613)================
The ODBC functions SQLPrimaryKeys() and SQLForeignKeys() would have returned
an incorrect name for the primary key identifier (PK_NAME). It should return
the PRIMARY KEY constraint name, but was returning the table name (the "default"
primary key name). This problem has been fixed.
================(Build #4023 - Engineering Case #616385)================
When making continuous ODBC connections and disconnections, using the ANSI
entry points to SQLConnect() and SQLDisconnect(), a memory leak would have
occurred in the application. The UNICODE versions of SQLConnect() and SQLDisconnect()
(i.e., SQLConnectW() ) were not affected by this problem. The process heap
will continue to grow as the application loops. This problem has been fixed.
See also Engineering case 608095.
================(Build #4003 - Engineering Case #611168)================
ODBC drivers support standard escape sequences in bound data as a driver-independent
way to specify date and time literals, outer joins and procedure calls.
The SQL Anywhere driver was failing to recognize these escape sequences when
embedded in a bound parameter that contained a Unicode string. This has
been fixed.
================(Build #3996 - Engineering Case #608728)================
If a connection string was made up of parameters coming from different sources
(i.e. the connection string itself, DSNs or file DSNs, SQLCONNECT environment
variable), and the UID and PWD/ENP parameters were not specified in the same
source, the PWD/ENP would have been ignored. For example, if DSN "foo"
contained a UID but no PWD, then the connection string "DSN=foo;PWD=secret"
would ignore the PWD field. This has been fixed.
================(Build #3973 - Engineering Case #593676)================
When using the SQL Anywhere ODBC driver, the transaction isolation level
can be set with a call to the ODBC function SQLSetConnectAttr(). The following
is an example for setting the transaction isolation level to "readonly-statement-snapshot":
SQLSetConnectAttr( dbc,
SA_SQL_ATTR_TXN_ISOLATION,
(SQLPOINTER)SA_SQL_TXN_READONLY_STATEMENT_SNAPSHOT,
SQL_IS_INTEGER );
The following isolation level options are available.
SQL_TXN_READ_UNCOMMITTED
SQL_TXN_READ_COMMITTED
SQL_TXN_REPEATABLE_READ
SQL_TXN_SERIALIZABLE
SA_SQL_TXN_SNAPSHOT
SA_SQL_TXN_STATEMENT_SNAPSHOT
SA_SQL_TXN_READONLY_STATEMENT_SNAPSHOT
When any of the "snapshot" isolation levels were selected, the
ODBC driver would have set the transaction isolation level incorrectly upon
connecting to the server. The following is an example of a SET statement
that was executed:
SET TEMPORARY OPTION isolation_level = readonly-statement-snapshot;
This would have resulted in a syntax error and the server options would
not have been changed. This problem has been fixed. The ODBC driver will
now generate the following correct syntax with quotes around the isolation
level value.
SET TEMPORARY OPTION isolation_level = 'readonly-statement-snapshot';
================(Build #3969 - Engineering Case #592784)================
Calls to the ODBC function SQLGetTypeInfo() always returned 0 in the CASE_SENSITIVE
metadata column for the XML data type. For case-sensitive databases, string
comparisons of columns and variables of type XML are case sensitive. Therefore,
SQLGetTypeInfo() has been fixed to return 1 in this case.
================(Build #3953 - Engineering Case #587036)================
If a LONG VARCHAR column containing multi-byte data was fetched by the OLE
DB provider, the resulting string may have contained appended null characters.
As a result, the Length attribute of the string (e.g., strTxt.Length) may
have been larger than expected. This problem has been fixed.
================(Build #3927 - Engineering Case #581793)================
If a SQL_NUMERIC or SQL_DECIMAL value had a precision sufficiently greater
than the bound precision, then an internal buffer would have been overrun.
For example, if a bound column was defined as DECIMAL(10,2), and the precision
described by the input SQL_NUMERIC_STRUCT parameter value was 40, then an
internal buffer would have been overrun. This problem has been fixed.
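To make the mismatch concrete, the following sketch shows how the little-endian val bytes of a numeric carry the unscaled digits whose count is given by precision; a precision far larger than the bound column's can describe many more bytes than the driver's buffer expected. The struct mirrors the ODBC SQL_NUMERIC_STRUCT layout, but the type and decoder here are illustrative, not driver code.

```c
#include <string.h>

/* Minimal mirror of the ODBC SQL_NUMERIC_STRUCT layout from sqltypes.h
   (declared locally here for illustration; real applications include
   the ODBC headers). */
#define SA_MAX_NUMERIC_LEN 16
typedef struct {
    unsigned char precision;                 /* total number of digits  */
    signed char   scale;                     /* digits after the point  */
    unsigned char sign;                      /* 1 = positive, 0 = negative */
    unsigned char val[SA_MAX_NUMERIC_LEN];   /* little-endian magnitude */
} sa_numeric;

/* Decode the unscaled magnitude into a 64-bit integer. This only works
   when the value fits in 8 bytes; values with much larger precision
   (e.g. 40 digits) need more bytes, which is exactly the case that
   overran the driver's internal buffer. */
long long sa_numeric_unscaled( const sa_numeric *n )
{
    long long v = 0;
    for( int i = 7; i >= 0; i-- )
        v = ( v << 8 ) | n->val[i];
    return n->sign ? v : -v;
}
```

For example, 123.45 is stored as precision 5, scale 2, with val holding 12345 in little-endian byte order.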
================(Build #3921 - Engineering Case #579219)================
Calling the function SQLGetInfo() with the SQL_SQL92_STRING_FUNCTIONS
information type returns a bitmask indicating which SQL 92 string functions
are supported by the database server.
The following bits should not be returned:
SQL_SSF_CONVERT
SQL_SSF_SUBSTRING
SQL_SSF_TRIM_BOTH
SQL_SSF_TRIM_LEADING
SQL_SSF_TRIM_TRAILING
as SQL Anywhere does not support the above SQL 92 functions (e.g., SELECT
TRIM(BOTH ' ' FROM ' abc ')).
Only the following bits should be returned.
SQL_SSF_LOWER
SQL_SSF_UPPER
This problem has been fixed.
================(Build #3915 - Engineering Case #579816)================
A new connection parameter has been added to the ODBC driver. This parameter
(ESCAPE) allows you to specify the escape character used in the LIKE clause
of SQL statements generated by the ODBC driver when returning a list of tables
or columns.
By default the ODBC driver uses the tilde character (~), but some applications
do not properly query the ODBC driver to see what escape character is used,
and assume that it is the backslash character (\). With this new connection
parameter, users can specify the escape character in the connection string
to make these applications work properly.
"DSN=My Datasource;UID=xxx;PWD=xxx;ESCAPE=\;ENG=myserver"
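For illustration, the catalog queries the driver generates escape LIKE metacharacters in object names with this character; with ESCAPE=\ a generated clause would take a shape like the following (a hypothetical sketch of the query shape, not the driver's exact SQL):

```sql
-- hypothetical shape of a generated catalog query; the underscore in
-- the table-name pattern is escaped with the configured character
SELECT table_name
  FROM SYS.SYSTAB
 WHERE table_name LIKE 'my\_table%' ESCAPE '\';
```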
================(Build #3900 - Engineering Case #574746)================
If an unknown connection parameter was provided to the ODBC driver, the driver
would have given the error: "Parse Error: Invalid or missing keyword
near '<unknown parameter>'". Some applications (for example,
Crystal Reports in some cases) generate connection parameters which are unknown
to SQL Anywhere, which resulted in connection failures. This has been fixed
so that unknown connection parameters are now ignored. This returns the ODBC
driver to the Adaptive Server Anywhere 9.x and earlier behavior.
================(Build #3899 - Engineering Case #573990)================
When calling the SADataReader.GetBytes method into a binary or varbinary
column with a null buffer, it would have thrown the exception "no current
row of cursor". This problem has been fixed.
================(Build #3866 - Engineering Case #565110)================
When binding a column in ODBC, the column type, a buffer and a length are
specified. For static types (int, long, double, time, etc) the ODBC driver
ignores the length specified because the length is constant. The ODBC driver
should have been treating SQL_GUID columns as static as well, but was incorrectly
respecting the length specified, which sometimes resulted in truncated values.
This has been fixed.
================(Build #3865 - Engineering Case #565054)================
Calling the ODBC function SQLColAttribute( ..., SQL_DESC_BASE_COLUMN_NAME,
... ) could have incorrectly returned the correlation name, instead of the
base column name. This has been fixed so that the base column name is returned.
If there is no base column, the column alias is returned.
Note, this fix requires both an updated ODBC driver and server.
================(Build #3860 - Engineering Case #563848)================
If an ODBC application called SQLColAttribute() on a long varchar, long nvarchar
or long binary column, with an attribute value of SQL_DESC_DISPLAY_SIZE,
then the ODBC driver would have incorrectly returned the value 65535. This
problem has been fixed and the driver now returns the value 2147483647.
================(Build #3847 - Engineering Case #549800)================
The SQL Anywhere ODBC driver incorrectly described a column of type NCHAR
as SQL_WVARCHAR (-9), instead of SQL_WCHAR (-8), when the odbc_distinguish_char_and_varchar
database option was set 'off'. In the following SQL statement, the two columns
should be described as SQL_WCHAR and SQL_WVARCHAR respectively.
select cast('abc' as nchar),cast('abc' as nvarchar)
This problem did not affect calls to SQLColumns(), but it did affect calls
to SQLDescribeCol(). This problem has been fixed.
The odbc_distinguish_char_and_varchar option is intended for CHAR columns
only. It is provided for backwards compatibility with older versions of the
SQL Anywhere ODBC driver. For backwards compatibility, the odbc_distinguish_char_and_varchar
option is set 'off' by default. When odbc_distinguish_char_and_varchar option
is set 'on', the ODBC driver will describe CHAR columns as SQL_CHAR, rather
than SQL_VARCHAR.
================(Build #3842 - Engineering Case #551956)================
When the TargetType parameter to the ODBC function SQLGetData() is SQL_C_DEFAULT,
the driver selects the default C data type based on the SQL data type of
the source.
For NCHAR types, the Microsoft ODBC driver selects a SQL_C_CHAR binding.
Microsoft acknowledges in its knowledgebase article http://support.microsoft.com/default.aspx/kb/293659
that to do so is an error, but states that it does this to support older
legacy applications. The SQL Anywhere ODBC driver selected a SQL_C_WCHAR
binding. This caused problems with applications like Microsoft Access, that
expected the Microsoft ODBC behaviour. This problem in Access could have
been seen when setting up a linked table using the SQL Anywhere ODBC driver
to a database table containing NCHAR, NVARCHAR, or LONG NVARCHAR column types.
The resulting table columns were displayed with "#deleted" values.
To conform to the Microsoft ODBC driver behaviour, the SQL Anywhere ODBC
driver has been changed. Now, the SQL Anywhere ODBC driver selects a SQL_C_CHAR
binding for NCHAR, NVARCHAR, and LONG NVARCHAR (including NTEXT) types when
the "MS Applications ( Keys In SQLStatistics)" checkbox is selected
in the ODBC Administrator configuration dialog for SQL Anywhere datasources.
This option can also be specified as the connection parameter "KeysInSQLStatistics=YES"
in a SQL Anywhere ODBC driver connection string. This change affects any
ODBC function, like SQLBindCol, SQLBindParameter, and SQLGetData, where SQL_C_DEFAULT
can be specified.
================(Build #3839 - Engineering Case #557955)================
Calls to the function SQLGetInfo() would have incorrectly returned SQL_CB_NULL
for SQL_CONCAT_NULL_BEHAVIOR. Since the server's result of concatenating a
string and NULL is the string itself, SQLGetInfo() has now been corrected
to return SQL_CB_NON_NULL for SQL_CONCAT_NULL_BEHAVIOR.
================(Build #3839 - Engineering Case #555236)================
SQLColumns returns a result set including COLUMN_SIZE, BUFFER_LENGTH, and
CHAR_OCTET_LENGTH among other things. For a column typed NCHAR(30) or NVARCHAR(30),
the COLUMN_SIZE was returned as 30 and the BUFFER_LENGTH as 30. The BUFFER_LENGTH
should have been 60. For CHAR_OCTET_LENGTH, NULL is returned, whereas it
should have been 60 as well.
From the ODBC standard:
"COLUMN_SIZE": For character types, this is the length in characters
of the data; The defined or maximum column size in characters of the column
or parameter (as contained in the SQL_DESC_LENGTH descriptor field). For
example, the column size of a single-byte character column defined as CHAR(10)
is 10.
"BUFFER_LENGTH": The defined or the maximum (for variable type)
length of the column in bytes. This is the same value as the descriptor field
SQL_DESC_OCTET_LENGTH.
"CHAR_OCTET_LENGTH": The maximum length in bytes of a character
or binary data type column. For all other data types, this column returns
a NULL.
These problems have been fixed.
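The corrected relationship between the three metadata columns for wide character types can be sketched as follows, assuming 2-byte wide characters as described above (an illustrative helper with a hypothetical name, not driver code):

```c
/* Corrected SQLColumns metadata for an NCHAR(n)/NVARCHAR(n) column,
   assuming each wide character occupies 2 bytes (illustration only). */
typedef struct {
    int column_size;        /* length in characters (SQL_DESC_LENGTH)     */
    int buffer_length;      /* length in bytes (SQL_DESC_OCTET_LENGTH)    */
    int char_octet_length;  /* max length in bytes for char/binary types  */
} nchar_meta;

nchar_meta nchar_columns_meta( int n_chars )
{
    nchar_meta m;
    m.column_size       = n_chars;      /* NCHAR(30) -> 30 */
    m.buffer_length     = n_chars * 2;  /* NCHAR(30) -> 60 */
    m.char_octet_length = n_chars * 2;  /* NCHAR(30) -> 60 */
    return m;
}
```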
================(Build #3838 - Engineering Case #555059)================
When using Microsoft's AppVerifier (a development/debugging tool), the tool
may have reported various errors, such as the use of an invalid handle in
client libraries (ODBC, dblib, etc) or in the database server. These problems
have been fixed and the database server and client libraries now run cleanly
under AppVerifier.
Note, without the use of AppVerifier it would have been extremely rare for
users to encounter these problems.
================(Build #3833 - Engineering Case #555963)================
If an NCHAR, NVARCHAR, or LONG NVARCHAR parameter was bound as SQL_C_DEFAULT,
the binding would have defaulted to SQL_C_CHAR, instead of SQL_C_WCHAR. This
has been corrected.
================(Build #3833 - Engineering Case #555617)================
When calling SQLForeignKeys(), the result set was not returned in the correct
sort order. According to the ODBC standard, the sort orders are:
1. If the foreign keys associated with a primary key are requested, the
result set is ordered by FKTABLE_CAT, FKTABLE_SCHEM, FKTABLE_NAME, and KEY_SEQ.
2. If the primary keys associated with a foreign key are requested, the
result set is ordered by PKTABLE_CAT, PKTABLE_SCHEM, PKTABLE_NAME, and KEY_SEQ.
This has now been corrected.
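The first of the two required orderings can be sketched as a comparison function suitable for qsort() (the row struct and names are hypothetical; real applications receive these values through bound result-set columns):

```c
#include <stdlib.h>
#include <string.h>

/* Illustrative row holding the SQLForeignKeys ordering columns for
   case 1 above (foreign keys associated with a primary key). */
typedef struct {
    const char *fktable_cat;
    const char *fktable_schem;
    const char *fktable_name;
    int         key_seq;
} fk_row;

static int cmp_str( const char *a, const char *b )
{
    if( a == NULL || b == NULL )            /* NULL catalogs/schemas sort first */
        return ( a != NULL ) - ( b != NULL );
    return strcmp( a, b );
}

/* Sort order mandated by ODBC: FKTABLE_CAT, FKTABLE_SCHEM,
   FKTABLE_NAME, then KEY_SEQ. */
int fk_row_cmp( const void *pa, const void *pb )
{
    const fk_row *a = pa, *b = pb;
    int c;
    if( ( c = cmp_str( a->fktable_cat,   b->fktable_cat   ) ) != 0 ) return c;
    if( ( c = cmp_str( a->fktable_schem, b->fktable_schem ) ) != 0 ) return c;
    if( ( c = cmp_str( a->fktable_name,  b->fktable_name  ) ) != 0 ) return c;
    return a->key_seq - b->key_seq;
}
```

Case 2 is identical except that it compares PKTABLE_CAT, PKTABLE_SCHEM, and PKTABLE_NAME before KEY_SEQ.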
================(Build #3820 - Engineering Case #554106)================
The ODBC driver would have issued an HY090 error if any of the string
parameters to SQLConnect() were specified as a combination of a NULL
pointer and SQL_NTS. This situation is permitted by the ODBC specification,
so the restriction has now been removed.
================(Build #3820 - Engineering Case #554080)================
Applications which used the EncryptedPassword connection parameter could
have crashed when attempting to connect. This has been fixed.
================(Build #3787 - Engineering Case #547905)================
If a SQL statement contained comments using %-style comments, then the ODBC
driver could have reported a syntax error when the comment contained an unmatched
quote.
For example:
% it's a contraction
The ODBC driver has to parse statements looking for ODBC escape sequences,
but did not handle %-style comments. This has been fixed.
================(Build #3773 - Engineering Case #545565)================
When an application called the ODBC function SQLTables() to get a list of
supported table types, the TEXT table type would not have been listed. In
addition, calling SQLTables() to list tables would have incorrectly listed
tables of type TEXT as LOCAL TEMPORARY. Similarly, when an application using
the iAnywhere JDBC driver called DatabaseMetaData.getTableTypes() to get
a list of supported table types, the TEXT table type would not have been
listed; and calling DatabaseMetaData.getTables() would have incorrectly identified
TEXT tables as LOCAL TEMPORARY. Both the ODBC driver and JDBC driver have
now been updated to properly list the TEXT table type and identify tables
of type TEXT correctly.
================(Build #3727 - Engineering Case #534949)================
The iAS ODBC driver for Oracle would have truncated the time portion of timestamp
values, when the application was trying to fetch the timestamp values using
SQLGetData. An internal buffer was too small for a timestamp column. This
has been fixed.
================(Build #3701 - Engineering Case #531532)================
If a connection was dropped, or the server went down with a fatal error while
connecting with ODBC, OLE DB or ADO.NET, or while calling the DBLib functions
db_change_char_charset or db_change_nchar_charset, the client application
could have crashed. This has now been fixed.
================(Build #3683 - Engineering Case #497117)================
Applications could have crashed after specific sequences of PREPAREs and
EXECUTEs, or OPENs, if the option max_statement_count was increased from
its default value. In particular, for the crash to have occurred a connection
must have done at least 50 PREPAREs, then at least 500 EXECUTEs or OPENs,
then have had at least 50 statements concurrently prepared or opened. This
has been fixed.
================(Build #3677 - Engineering Case #497256)================
When calling the ODBC function SQLGetInfo(), the driver will return an indication
that conversion from SQL_VARCHAR to SQL_WVARCHAR is possible using the ODBC
CONVERT function.
For example:
rc = SQLGetInfo(hdbc, SQL_CONVERT_VARCHAR, (SQLPOINTER)&bitmask, sizeof(bitmask),
NULL);
This will return with the SQL_CVT_WVARCHAR bit set in bitmask which indicates
that the conversion is possible. However, attempting to do the conversion
will return an error. This is illustrated by the preparing the following
statement for execution:
SELECT {fn CONVERT(Surname,SQL_WVARCHAR)} FROM Employees
The error is "[42000][Sybase][ODBC Driver]Syntax error or access violation".
This problem has been fixed. Support has been added for conversion to types
SQL_WCHAR, SQL_WLONGVARCHAR, SQL_WVARCHAR, and SQL_GUID.
================(Build #3670 - Engineering Case #496435)================
An application using the iAnywhere JDBC driver on Unix systems, could have
crashed when making a connection. This problem has now been fixed.
================(Build #3662 - Engineering Case #492780)================
When an ODBC datasource was modified on Windows, the permission settings
on the registry key were modified. This has been corrected.
================(Build #3656 - Engineering Case #493330)================
When running Visual FoxPro the behaviour of the ODBC driver was different
between version 9.0 and version 10.0. Version 10.0 reported different error
messages when closing connections. This has been fixed so the behaviour
is now consistent between versions.
================(Build #3641 - Engineering Case #490036)================
When making continuous ODBC connections and disconnections using SQLConnect
and SQLDisconnect, a memory leak would have occurred in the application.
The process heap would have continued to grow as the application looped.
To reproduce the memory leak, the application must have allocated and freed
environment and connection handles around the SQLConnect and SQLDisconnect
calls, to ensure that the SQL Anywhere ODBC driver was loaded and unloaded
from memory. This problem has been fixed.
================(Build #3631 - Engineering Case #489435)================
Attempting to execute a "MESSAGE ... TO CLIENT" statement using
the Interactive SQL utility (dbisql) on Unix platforms, would very likely
have caused it to hang. This problem has now been fixed.
================(Build #3628 - Engineering Case #488520)================
On Unix systems, 64-bit ODBC applications required LONG_IS_64BITS to be defined
at compilation time. Failure to do this would most likely have resulted in
a crash in the application. This has been fixed.
================(Build #3614 - Engineering Case #487008)================
The changes made for Engineering case 484553 incorrectly had the PWD value
replaced with all asterisks "*" in the OutConnectionString parameter
of the SQLDriverConnect() function. This has been corrected.
================(Build #3592 - Engineering Case #484003)================
If a proxy table to a table on a Microsoft SQL Server remote server had
a UUID column, attempting to insert a value generated by newid() into that
column would have failed with a syntax error. This problem has now been fixed.
================(Build #3582 - Engineering Case #483213)================
The ODBC driver was describing NCHAR and NVARCHAR columns as SQL_WCHAR or
SQL_WVARCHAR, with the SQL_DESC_OCTET_LENGTH specified as too small, if the
column contained surrogate pairs. Depending on the application, this could
have resulted in fetched NCHAR data being truncated. This has been fixed
so that NCHAR columns now have a SQL_DESC_OCTET_LENGTH which allows for surrogate
pairs.
Note, this problem also affected the iAnywhere JDBC driver, which is used
by the Interactive SQL utility dbisql.
================(Build #3582 - Engineering Case #482773)================
The ODBC driver could have caused a segmentation fault when used on Unix
systems with some ODBC Driver Managers (for example, unixODBC) if the DSN
existed but the connection failed (for example, it failed to autostart a
server). This has been fixed.
================(Build #3533 - Engineering Case #475624)================
When using a multi-threaded ODBC application with distributed transactions,
attempting to enlist a DTC transactions could have failed in some cases.
This problem has now been fixed.
================(Build #3525 - Engineering Case #473833)================
If an ODBC application bound a procedure's INOUT parameter as input-only,
a communication error may have occurred. Calling SQLBindParameter( ...,
SQL_PARAM_INPUT, SQL_C_WCHAR,... ) would bind an input-only parameter. This
has been fixed.
================(Build #3516 - Engineering Case #473206)================
On UNIX platforms, linking applications directly against the iAnywhere ODBC
driver without using a driver manager is supported. When an application was
linked in this way, the FileDSN connection parameter would have been ignored
by the ODBC driver. This has been fixed.
As a workaround, a driver manager, such as unixODBC, can be installed, or
the ODBCINI environment variable can be set to point to the DSN file in question.
The second workaround requires the FileDSN parameter to be changed
to DSN.
================(Build #3509 - Engineering Case #472245)================
A call to SQLCancel() by an ODBC application may have failed, if the attempt
was to cancel a multi-row insert. The results may have varied, but the
application could have appeared to hang. The ODBC driver attempts to execute
multi-row inserts as a single operation. If the operation failed, it attempted
to insert rows one at a time. In the case where the multi-row insert failed
due to the operation being cancelled, the driver was left in a bad state.
Now, the driver does not bother trying again if the original statement failed
because it was cancelled.
================(Build #3500 - Engineering Case #470296)================
The ODBC driver could have allocated a large amount of memory when using
a shared memory connection. This would have been very rare, and only occurred
with large responses. This has been fixed.
Note, this problem could also have occurred with embedded SQL applications,
and is now fixed as well.
================(Build #3488 - Engineering Case #467522)================
When making a remote procedure call to a stored procedure in a Microsoft
SQL Server database, if one of the arguments to the stored procedure was
a string argument of type Input with a value of NULL, then the RPC would
have failed with an "Invalid precision value" error. This problem has
now been fixed.
================(Build #3470 - Engineering Case #462915)================
The ODBC PrefetchOnOpen optimization was disabled for queries which had input
parameters. The PrefetchOnOpen optimization is disabled by default, and
is enabled with the PrefetchOnOpen ODBC connection parameter. This has been
changed so that the optimization is now enabled for queries which have input
parameters if the ODBC PrefetchOnOpen connection parameter is used.
================(Build #4201 - Engineering Case #662896)================
When using the OLEDB provider, if a statement was prepared, executed, and
then the ADO MoveFirst() (OLE DB RestartPosition() ) method was called when
the cursor type was forward-only, the statement would have become unprepared.
Subsequent attempts to execute the prepared statement would then have failed.
This problem has been corrected.
================(Build #4185 - Engineering Case #658887)================
The OLE DB DBSCHEMA_FOREIGN_KEYS rowset returned the following indicated
integer values for the UPDATE_RULE or DELETE_RULE referential actions.
CASCADE                      0
referential action is NULL   1
SET NULL                     2
SET DEFAULT and RESTRICT     3
These values, derived from the ODBC SQLForeignKeys result set, did not match
the following OLEDB constants defined for these referential actions.
DBUPDELRULE_NOACTION = 0x0
DBUPDELRULE_CASCADE = 0x1
DBUPDELRULE_SETNULL = 0x2
DBUPDELRULE_SETDEFAULT = 0x3
Furthermore, the DBSCHEMA_FOREIGN_KEYS rowset should have returned strings
rather than integers for the UPDATE_RULE or DELETE_RULE referential actions.
The OLE DB specification states:
If a rule was specified, the UPDATE_RULE or DELETE_RULE value is one of
the following:
"CASCADE" — A <referential action> of CASCADE was specified.
"SET NULL" — A <referential action> of SET NULL was specified.
"SET DEFAULT" — A <referential action> of SET DEFAULT was
specified.
"NO ACTION" — A <referential action> of NO ACTION was specified.
Providers should return NULL only if they cannot determine the UPDATE_RULE
or DELETE_RULE. In most cases, this implies a default of NO ACTION.
Also, the fix for Engineering case 620136 did not handle the situation when
there was no declared Primary Key in the primary table but there were table
or column (not nullable) Unique Constraints that permitted the addition of
foreign keys.
These problems have been fixed. The Upgrade utility should be used to update
the OLE DB schema rowset support in any database used with ADO, ADOX or OLE
DB.
================(Build #4163 - Engineering Case #651383)================
As of the changes for Engineering case 633120, a SQL query producing multiple
result sets was not handled correctly by the SQL Anywhere OLE DB provider.
This problem has now been corrected. The cursor is no longer closed after
the first result set is processed.
================(Build #4139 - Engineering Case #645953)================
The DBSCHEMA_PROVIDER_TYPES rowset schema incorrectly returned 0x in the
LITERAL_PREFIX column for varbit types. The apostrophe character (') is now
returned in the LITERAL_PREFIX and LITERAL_SUFFIX columns instead.
================(Build #4127 - Engineering Case #642865)================
The SQL Anywhere OLE DB provider was ignoring the Location and Initial Catalog
connection parameters. This problem has been fixed.
"Location" can now be used to specify the host name and port of
the database server. The form is hostname:port (e.g., username-pc:3628).
"Location" is mapped to the SQL Anywhere "Host" connection
parameter for version 12 or later and the "CommLinks" connection
parameter for version 11 or earlier.
"Initial Catalog" can now be used to specify the database to connect
to when more than one database has been started by a SQL Anywhere database
server. "Initial Catalog" is mapped to the SQL Anywhere "DatabaseName"
(DBN) connection parameter.
================(Build #4126 - Engineering Case #642980)================
A number of corrections and improvements have been made to the SQL Anywhere
OLE DB schema rowset support procedures:
- If a catalog name is specified as one of the schema restrictions, the
procedure will make sure it matches the current catalog. If it does not,
a single row will be returned with NULLs.
- Any rowset that can return a catalog name in a column will now return
the current database name in that column instead of NULL.
- The rows returned in the DBSCHEMA_PROVIDER_TYPES rowset have been slightly
reordered for better results with Microsoft tools. This was done since Microsoft
tools ignore the BEST_MATCH column and use the first row that matches the
datatype it is searching for.
- In the DBSCHEMA_PROVIDER_TYPES schema, the XML datatype will now set
the DATA_TYPE column to 141 (DBTYPE_XML), the IS_LONG column to 1 and return
2GB instead of 32767 for COLUMN_SIZE.
- In the DBSCHEMA_PROVIDER_TYPES schema, the TIMESTAMP WITH TIME ZONE datatype
will now set the DATA_TYPE column to 146 (DBTYPE_DBTIMESTAMPOFFSET). This
is supported in version 12 or later of SQL Anywhere.
- An entry for the REAL datatype was missing from the DBSCHEMA_PROVIDER_TYPES
rowset. This row has been added.
To install these updates into a database, the Upgrade utility (dbupgrad),
or the ALTER DATABASE UPGRADE statement can be used.
================(Build #4121 - Engineering Case #641092)================
The changes for Engineering case 633120, introduced a problem with returning
a character string column value when the length of the column value was not
bound by the consumer (i.e., the consumer does not provide a pointer to a
length field). In this special case, the returned string value should be
null-terminated. This has been fixed.
================(Build #4117 - Engineering Case #639712)================
The OLE DB provider's TABLES, TABLES_INFO, and VIEWS schema rowset support
procedures did not identify MATERIALIZED VIEWs and TEXT CONFIGURATION objects
correctly. Materialized views are now reported as "VIEWS", and
text configuration objects are now identified as "TEXT", in the
TABLE_TYPE column.
Also, the OLE DB provider's TABLES schema rowset included an unnecessary
and undocumented column "PREFIXSYSOWNED". This column has been
removed from the rowset to match similar behavior of the stored procedure
that produces the TABLES_INFO schema rowset.
The OLE DB provider's TABLE_CONSTRAINTS schema rowset support procedure
failed with the error "Column 'check' not found". This has been
fixed.
The Upgrade utility (dbupgrad) should be used to update the OLE DB schema
rowset support in any database used with ADO, ADOX or OLE DB.
================(Build #4111 - Engineering Case #638848)================
A few improvements have been made to ADOX/OLEDB table creation.
Long types are now mapped to SQL Anywhere long types. For example, an adLongVarChar
column is now mapped to "LONG VARCHAR" instead of "CHAR(0)".
Wide types are now mapped to SQL Anywhere nchar types instead of char types.
For example, adWChar is now mapped to "NCHAR" and adLongVarWChar
is now mapped to "LONG NVARCHAR".
An adSingle column with no specified precision will now default to REAL rather
than FLOAT(0), which generated a syntax error.
An adDecimal column with no specified precision and scale will now default
to DECIMAL rather than DECIMAL(0,0), which generated a syntax error.
An adNumeric column with no specified precision and scale will now default
to NUMERIC rather than NUMERIC(0,0), which generated a syntax error.
An adLongVarBinary column will map to the IMAGE type rather than BINARY(0),
which generated a syntax error.
An adCurrency column is now supported and will map to a column of type MONEY.
An adDate column is now supported and will map to a column of type DATETIME.
If a table or column name is not defined, OLEDB will no longer fault with
a NULL pointer reference. Instead, the name "undefined" will be
used.
The following code fragment is a VBScript example for creating a table using
ADOX statements.
Set db = CreateObject( "ADOX.Catalog" )
Set ntable = CreateObject( "ADOX.Table" )
ntable.Name = "testTable"
ntable.Columns.Append "Col_1", adNumeric
ntable.Columns.Append "Col_2", adDate
ntable.Columns.Append "Col_3", adChar, 32
ntable.Columns.Append "Col_4", adVarChar, 32767
ntable.Columns.Append "Col_5", adLongVarChar
ntable.Columns.Append "Col_6", adLongVarWChar
db.Tables.Append ntable
================(Build #4096 - Engineering Case #634664)================
The Microsoft SQL Server Reporting Services 2008 application uses the Linked
Server mechanism to communicate via OLE DB to a SQL Anywhere server. It can
send EXEC statements of the following form to the SQL Anywhere OLE DB provider:
EXEC owner.procedure_name :parm1, :parm2, ...
where :parm1, etc. are bound parameters.
The SQL Anywhere OLE DB provider has been improved to now handle this syntax.
================(Build #4088 - Engineering Case #633125)================
Improvements to the DBSCHEMA_PROVIDER_TYPES rowset have been made to make
it more consistent with Microsoft SQL Server.
================(Build #4088 - Engineering Case #633120)================
Microsoft's SQL Server 2005/2008 Replication software allocates a 0x200 byte
buffer for the TYPE_NAME column of the DBSCHEMA_PROVIDER_TYPES rowset. It
then creates a DBBINDING structure identifying the length of the buffer as
0x300 bytes. When the SQL Anywhere OLE DB provider initializes the buffer
with nulls, a stack overrun occurs and Microsoft's Replication software faults.
As a work-around for Microsoft's bug, the SQL Anywhere OLE DB provider will
no longer initialize the consumer's buffer with nulls.
================(Build #4079 - Engineering Case #631330)================
Some inconsistencies between SQL Anywhere OLE DB column metadata information
and run-time column information caused problems with accessing tables via
the Microsoft SQL Server "Linked Server" mechanism. These problems
affected NCHAR, NVARCHAR, LONG NVARCHAR, VARBIT, LONG VARBIT, and TIMESTAMP
WITH TIME ZONE columns. The TIMESTAMP WITH TIME ZONE data type is new to version
12.0.0. These problems have been fixed. Table rows inserted using the OLE
DB OpenRowset/InsertRows methods are now done with autocommit turned off.
Once the inserts are completed, the rows are committed.
For the complete fix to this problem, use the Upgrade utility (dbupgrad)
to upgrade existing databases with fixes for the OLE DB schema rowset support
(metadata support).
================(Build #4073 - Engineering Case #629606)================
When using SQL Server Integration Services (SSIS), an attempt to migrate
tables between SQL Anywhere/Sybase IQ and SQL Server databases would have
failed. This problem has been corrected.
Note, if the Data Flow consists of more than 10 tables that are to be migrated
to SQL Server from a SQL Anywhere server, the Personal server should not
be used since each table is moved asynchronously on a separate connection
(i.e., more than 10 simultaneous connections will be made to the SQL Anywhere
server and the number of simultaneous connections is limited with the Personal
server).
================(Build #4073 - Engineering Case #627407)================
.NET applications using the SQL Anywhere OLE DB provider may have failed
with the following error for OleDbDataReader.Close():
"Reader Exception: Attempted to read or write protected memory. This
is often an indication that other memory is corrupt."
This has been fixed.
================(Build #4073 - Engineering Case #627266)================
An "Out of Memory" assertion may have been raised by the SQL Anywhere
OLE DB provider, which may have been indicative of heap corruption. This
problem may arise when binding parameters that are described as DBTYPE_IUNKNOWN
(a type used for LONG VARCHAR parameters). This problem has been fixed.
================(Build #4037 - Engineering Case #620289)================
If the length for a data column was described as SQL_NTS (-3), the SQL Anywhere
OLE DB provider would not have computed the correct length for the column
on 64-bit platforms. This problem has been fixed.
Note, this problem was seen with SQL Server Integration Services applications
on 64-bit Windows platforms, it does not appear in ADO applications.
================(Build #4037 - Engineering Case #620287)================
The SQL Anywhere OLE DB provider would have leaked memory if the InsertRow
method was called with no row handle return pointer.
For example,
HRESULT hr = rowset::InsertRow( hChapter, hAccessor, pData, NULL );
would have resulted in a memory leak, since phRow (the 4th argument) is
NULL. This problem may have occurred when using the provider with SQL Server
Integration Services (SSIS).
This problem has been fixed.
================(Build #4023 - Engineering Case #617397)================
By default, the OLE DB provider indicates, through a property setting, that
it does not support returning multiple result sets, although the provider
is capable of doing so. An undocumented connection option "ASA Multiple
Results=TRUE" will enable the returning of multiple result sets. The
provider has been changed so that returning of multiple result sets is now
supported by default. More specifically, DBPROP_MULTIPLERESULTS is now set
to DBPROPVAL_MR_SUPPORTED by default. If desired, the connection option "ASA
Multiple Results=FALSE" can be used to change the property value to
DBPROPVAL_MR_NOTSUPPORTED. However, there is no known benefit to using this
option.
================(Build #4002 - Engineering Case #611017)================
When the OLE DB provider's GetNextRows method was called, the next row would
not have been read if the previous row had NULL column values. This problem
was introduced by the changes for Engineering case 605058, and has now been
fixed.
================(Build #3987 - Engineering Case #606642)================
If a division by zero error occurred in a result set, the SQL Anywhere OLE
DB provider would have returned DB_S_ENDOFROWSET instead of 0x800a000b (divide
by zero). For example, the following SELECT statement will result in a division
by zero error if the column value for "num" is 3:
SELECT num/(num-3) FROM divisionby0
This problem has been fixed. ADO will now set the correct error number (11)
and description (Division by zero) from the error code returned by OLE DB.
================(Build #3986 - Engineering Case #606229)================
When using the SQL Anywhere OLE DB provider, a memory leak could have occurred
when fetching data from LONG VARCHAR or LONG VARBINARY columns. This problem has
been fixed.
================(Build #3985 - Engineering Case #605803)================
When uploading data into a SQL Anywhere database from Microsoft SQL Server
using the Linked Server mechanism, SQL Server could have reported that it
had received inconsistent metadata information and failed the upload. This
was due to the SQL Anywhere OLE DB provider returning inconsistent column
lengths for VARCHAR and NVARCHAR columns when using the UTF8 character set
collation. For example, an NVARCHAR(100) column length would have been reported
as 400, which is the octet length for this column using the UTF8 collation,
but the "ulColumnSize" field of the DBCOLUMNINFO structure should
contain the maximum length in characters for DBTYPE_STR and DBTYPE_WSTR columns,
not the maximum length in bytes. This problem has been corrected.
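The character-count versus octet-count distinction can be reproduced with any UTF-8 data, independently of the provider. A minimal Python sketch (the values are illustrative, not taken from the provider's code):

```python
# In a UTF8 collation one character can occupy up to 4 bytes, so an
# NVARCHAR(100) column may need up to 400 bytes of storage even though
# its length in characters (what ulColumnSize should report) is 100.
value = "é" * 100                # 100 characters
octets = value.encode("utf-8")   # each "é" encodes to 2 bytes here
print(len(value), len(octets))   # 100 characters, 200 bytes
```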
================(Build #3985 - Engineering Case #605058)================
If a client-side cursor UPDATE was performed using the SQL Anywhere OLE DB
provider, and the column was of type VARCHAR(n), where n was greater than
or equal to 256 and the column value was originally NULL, then an error message
similar to the following would have been issued by ADO:
Row cannot be located for updating. Some values may have been changed
since it was last read.
The OLE DB provider was failing to return DBSTATUS_S_ISNULL for the column
value and returned an empty string instead. This caused ADO to generate an
UPDATE statement with a WHERE clause expression of the form "column
= ?" and a bound value of '' (a zero-length string). This problem has
been fixed. ADO will now generate an UPDATE statement with a WHERE clause
expression of the form "column IS NULL".
A workaround is to use a server-side cursor.
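The underlying SQL behaviour is that a comparison against NULL is never true, so a WHERE clause of the form "column = ''" cannot locate a row whose value is NULL. A small sketch using Python's built-in sqlite3 module as a stand-in SQL engine (table and column names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, val VARCHAR(300))")
conn.execute("INSERT INTO t VALUES (1, NULL)")

# "val = ''" never matches NULL (the comparison evaluates to UNKNOWN),
# which is why the generated UPDATE could not locate the row.
eq_empty = conn.execute("SELECT count(*) FROM t WHERE val = ''").fetchone()[0]
is_null = conn.execute("SELECT count(*) FROM t WHERE val IS NULL").fetchone()[0]
print(eq_empty, is_null)  # 0 1
```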
================(Build #3963 - Engineering Case #590210)================
The SQL Anywhere OLE DB provider implementation of IMultipleResults::GetResult()
returned an incorrect 'rows affected' count for inserts, updates, and deletes.
In this situation, the result returned by OleDbCommand.ExecuteNonQuery(),
which called the GetResult() method, was -1. This problem has now been fixed
to return the correct 'rows affected' count.
================(Build #3914 - Engineering Case #575490)================
Wide-fetching a query that referenced a proxy table could have returned the
last row multiple times, instead of returning no rows and the SQLE_NOTFOUND
warning. This has been fixed.
================(Build #3902 - Engineering Case #574697)================
If a NUMERIC INOUT parameter was bound as SQL_PARAM_INPUT_OUTPUT, the result
that was returned to the caller was always 0.
For example:
CREATE PROCEDURE _TEST_PROC
(
@IN_VAL1 NUMERIC(7, 0),
@IN_VAL2 NUMERIC(7, 0),
@OUT_VAL NUMERIC(7, 0) OUTPUT
)
AS
BEGIN
SET @OUT_VAL = @IN_VAL1 + @IN_VAL2
END
If the statement "CALL _TEST_PROC( 100, 200, ? )" was prepared, and the
third parameter was bound as SQL_PARAM_INPUT_OUTPUT, the result after execution
was 0. It should have been 300. If the parameter was bound as SQL_PARAM_OUTPUT,
the result returned was correct. This problem has been fixed.
Note that in the above Transact SQL procedure, OUT_VAL is an INOUT parameter,
since Transact SQL parameters are always INPUT and the addition of the OUTPUT
clause makes them INOUT.
================(Build #3861 - Engineering Case #564435)================
In an ADO/OLE DB application, when a UNIQUEIDENTIFIER (or GUID) was used
as a parameter in a query, an error message like "Cannot convert 0x
to a uniqueidentifier" may have resulted, or the query may simply have
failed to return any results. This problem has been fixed.
Sample schema:
create table uuidtable( pkey int, uuid uniqueidentifier, stringform uniqueidentifierstr );
Sample query:
select * from uuidtable where uuid = ?
The following .NET/OLE DB example shows typical uniqueidentifier parameter
binding:
OleDbCommand cmd = new OleDbCommand(txtSQLStatement.Text.Trim(), _conn);
OleDbParameter param1 = new OleDbParameter();
param1.ParameterName = "@p1";
param1.DbType = System.Data.DbType.Guid;
cmd.Parameters.Add(param1);
cmd.Parameters[0].Value = new Guid("41dfe9f9-db91-11d2-8c43-006008d26a6f");
================(Build #3827 - Engineering Case #554998)================
When ADO asks for the schema information related to a query (e.g., SELECT
* FROM Products), it requests a number of attributes like "IDNAME",
"TABLENAME", etc. The SQL Anywhere OLE DB provider returns a DBID
for schema rowset columns that matches Microsoft's declared DBID. For example,
the schema column DBCOLUMN_IDNAME is defined in Microsoft's OLEDB.H header
file as follows:
extern const OLEDBDECLSPEC DBID DBCOLUMN_IDNAME = {DBCIDGUID, DBKIND_GUID_PROPID, (LPOLESTR)2};
This is what the OLE DB provider would return as the DBID for the "IDNAME"
column. This strategy works for many ADO methods that request schema information.
However, the following example illustrates a problem with the ADO RecordSet
Save() method.
Dim strConnection = "PROVIDER=SAOLEDB;ServerName=demo;DatabaseName=demo;USERID=DBA;PASSWORD=sql"
Dim strSQLStatement = "SELECT ID, Name FROM Products"
Dim strXMLLocation = "c:\\temp\\products.xml"
Dim objADOConnection As New ADODB.Connection
Dim objADORecordSet As New ADODB.Recordset
objADOConnection.Open(strConnection)
objADORecordSet.Open(strSQLStatement, objADOConnection, ADODB.CursorTypeEnum.adOpenStatic,
ADODB.LockTypeEnum.adLockOptimistic)
If Not (objADORecordSet.EOF And objADORecordSet.BOF) Then
objADORecordSet.MoveFirst()
objADORecordSet.Save(strXMLLocation, ADODB.PersistFormatEnum.adPersistXML)
End If
For reasons unknown, the ADO RecordSet Save() method returns the error "catastrophic
failure".
The SQL Anywhere OLE DB provider has been changed to return only the "property
ID" part of the DBID. This is equivalent to returning the following
structure.
extern const OLEDBDECLSPEC DBID DBCOLUMN_IDNAME = {NULL, DBKIND_PROPID, (LPOLESTR)2};
This permits the ADO RecordSet Save() method to complete successfully.
================(Build #3818 - Engineering Case #552739)================
For an ADO/OLE DB application, if the CursorLocation was adUseClient and
the size of the query was larger than 4K characters, then the SQL Anywhere
OLE DB provider would have crashed. Also, if the client query contained single
quotes (apostrophes), then the query metadata would not have been obtained.
Both of these problems have now been fixed.
================(Build #3797 - Engineering Case #543695)================
Output parameters for stored procedure calls that were marked as indirect
(DBTYPE_BYREF) were not handled properly by the SQL Anywhere OLE DB provider.
This problem has been corrected.
================(Build #3766 - Engineering Case #544214)================
The changes for Engineering case 535861 caused the OLE DB schema support
stored procedures to not be installed in newly created databases. This problem
has now been corrected.
As a work-around, databases can be upgraded using dbupgrad.exe or by executing
an ALTER DATABASE UPGRADE PROCEDURE ON statement.
================(Build #3744 - Engineering Case #540698)================
Calling the OleDbDataReader GetString() method may have failed if the source
string had a length of 0, or it may have returned a string that was missing
the trailing null termination character when the source string length was
greater than or equal to 1 (i.e., "abcde" comes back as "abcd").
This problem has been fixed.
================(Build #3736 - Engineering Case #534792)================
The OLE DB provider did not correctly support the DBCOLUMN_BASECOLUMNNAME
rowset column of the IColumnsRowset::GetColumnsRowset method. This column
should contain the name of the column in the data store, which might be different
than the column name returned in the DBCOLUMN_NAME column if an alias or
a view was used. Here is an example.
CREATE TABLE GROUPO.MyTable(
DATA2 varchar(16),
DATA1 varchar(16),
PKEY int NOT NULL default autoincrement,
CONSTRAINT PKeyConstraint PRIMARY KEY (PKEY)
) ;
CREATE VIEW DBA.MyView( PKEY, DATA_1, DATA2)
AS SELECT PKEY, DATA1, DATA2 FROM MyTable;
Consider the following queries.
SELECT PKEY, DATA_1, DATA2 as D2 FROM MyView
SELECT PKEY, DATA1 as DATA_1, DATA2 as D2 FROM MyTable
In both cases, the OLE DB provider would return the following for DBCOLUMN_BASECOLUMNNAME
and DBCOLUMN_BASETABLENAME for these queries.
PKEY MyTable
DATA_1 MyTable
D2 MyTable
Of course, the DATA_1 and D2 columns are not found in MyTable.
With this fix, the provider now returns the correct column names.
PKEY MyTable
DATA1 MyTable
DATA2 MyTable
================(Build #3734 - Engineering Case #539056)================
For a statement of the form "EXEC <linked_server_name>..dba.myproc",
Microsoft SQL Server 2005 passes a statement of the form {?=call "dba"."myproc"
} to the SQL Anywhere OLE DB provider. It passes in a single integer parameter
for binding with a status of DB_E_UNAVAILABLE. The SQL Anywhere OLE DB provider
had always checked the status of parameters and accepted one of DBSTATUS_S_DEFAULT,
DBSTATUS_S_ISNULL, or DBSTATUS_S_OK. Any other status was flagged with an
error. As such, the above example would have failed with an error. Since
the parameter is OUTPUT-only, the status of the parameter can be ignored,
as the status for any OUTPUT parameters will be set after the statement has
been executed and any OUTPUT parameters will be filled in. The OLE DB provider
behaviour has been changed to ignore the incoming status of OUTPUT-only parameters.
This allows the EXEC statement to execute successfully.
================(Build #3729 - Engineering Case #537804)================
When the size of a long varchar or long binary column exceeded 32 KB on Windows,
or 1 KB on Windows Mobile, the column may not have been read correctly by
the OLE DB provider. This problem has been fixed.
================(Build #3724 - Engineering Case #536620)================
The OLE DB provider's ICommandWithParameters::SetParameterInfo() method may
have caused an access violation, depending on the order of parameter indexes
in rgParamOrdinals, which is one of the input parameters to the method. The
problem may have also occurred if SetParameterInfo() was called after ICommandPrepare::Prepare().
This problem has now been fixed.
================(Build #3723 - Engineering Case #530923)================
Occasionally, a DB_E_BADACCESSORHANDLE error would have been returned by the
OLE DB IMultipleResults::GetResult method. This error could have occurred
if the DBPARAMS structure that was passed to the ICommand::Execute method
was disposed by the client before all the result rowsets from a stored procedure
call were returned by calls to the GetResult method. On the final call to
GetResult, output parameters may have become available. As a result, the
DBPARAMS structure was required to be intact for each call to GetResult.
This problem has been fixed. When Execute is called, if it is determined
that there are no output parameters, then the DBPARAMS structure will be
ignored on subsequent calls to GetResult.
================(Build #3721 - Engineering Case #535861)================
Updates to the OLE DB schema support procedures were not installed into the
database using the Upgrade utility (dbupgrad) or when executing an ALTER
DATABASE UPGRADE statement. They were, however, installed when the PROCEDURE
ON clause was used with ALTER DATABASE UPGRADE. To ensure that dbupgrad will
perform the OLE DB update, the ALTER DATABASE UPGRADE support procedures
will now update and/or install the latest OLE DB schema support procedures.
Since PROCEDURE ON is no longer required for the OLE DB update, you are no
longer forced to update other system procedures.
================(Build #3696 - Engineering Case #530005)================
The provider's implementation of the GetParameterInfo() method did not indicate
whether parameters in a command were input (DBPARAMFLAGS_ISINPUT), output
(DBPARAMFLAGS_ISOUTPUT), or both.
Examples of such commands are:
INSERT INTO TestTable (ID,Name) VALUES(?,?)
?=CALL TestProcedure(?,?)
This problem has been fixed. The provider now returns the correct settings.
A work around is to use the SetParameterInfo() method to set the DBPARAMFLAGS_ISINPUT
or DBPARAMFLAGS_ISOUTPUT flags in the DBPARAMBINDINFO structure.
================(Build #3691 - Engineering Case #500661)================
When the database option "Blocking" was set to "Off",
an attempt to read rows using the SQL Anywhere OLEDB provider, from a table
that has some or all of its rows locked, would have resulted in the error
DB_S_ENDOFROWSET being returned, which means "no more rows". The
error that should be returned was DB_E_TABLEINUSE ("The specified table
was in use"). This problem has been fixed. Now, when ExecuteReader is
called, the error "User 'DBA' has the row in 'Customers' locked"
is reported.
The following .NET example can be used to illustrate the problem. Suppose
another connection is inserting rows into the Customers table. Then the following
example should result in an error when "blocking" is "off".
OleDbConnection conn2 = new OleDbConnection("Provider=SAOLEDB;DSN=SQL Anywhere 10 Demo");
conn2.Open();
OleDbTransaction trans2 = conn2.BeginTransaction(IsolationLevel.ReadCommitted);
OleDbCommand cmd2 = new OleDbCommand("SELECT FIRST Surname FROM Customers", conn2, trans2);
OleDbDataReader reader2 = cmd2.ExecuteReader();
while (reader2.Read())
{
String s = reader2.GetString(0);
MessageBox.Show(s);
}
================(Build #3678 - Engineering Case #497932)================
The 64-bit version of the OLE DB provider could have caused a page fault
and terminated. This problem has been fixed.
================(Build #3677 - Engineering Case #497937)================
The following problems with the OLE DB provider have been corrected:
- conversion of multibyte strings to wide character strings (DBTYPE_WSTR)
was not being done correctly for non-UTF8 character sets.
- DBTYPE_BYREF parameters were not supported.
- the provider would have crashed in the OLE DB Execute method when prepared
statements did not have their parameter information set using the OLE DB
SetParameterInfo method.
================(Build #3659 - Engineering Case #494462)================
When attempting to call a stored procedure in an ADO application, the OLE
DB provider could have returned an "invalid parameter type" error.
The order of the parameters in the procedure was not determined correctly
by the provider. This has been corrected.
================(Build #3583 - Engineering Case #482839)================
When Microsoft's Business Intelligence Development Studio attempted to insert
string values into a row using the OLEDB InsertRow() method, it passed in
a pointer to the data to be inserted. For string values (DBTYPE_STR), it
sometimes did not pass in a length, which caused the SQL Anywhere provider
to insert a string of length zero into the corresponding column. This behavior
has been changed. Now, for types DBTYPE_STR and DBTYPE_WSTR, the provider
will attempt to determine the string's true length when no length is passed
in, with the assumption being that the string is null-terminated.
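The fallback can be sketched as a scan for the terminating null byte. This is an illustration of the general technique, not the provider's actual implementation:

```python
# When no length accompanies a DBTYPE_STR value, assume the buffer is
# null-terminated and measure up to (but not including) the first null
# byte; an unterminated buffer is taken at its full length.
def c_string_length(buf: bytes) -> int:
    terminator = buf.find(b"\x00")
    return len(buf) if terminator < 0 else terminator

print(c_string_length(b"hello\x00padding"))  # 5, not 0
```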
================(Build #3568 - Engineering Case #481000)================
Borland Delphi converts an empty string parameter to a VARIANT type wide
string with a NULL data pointer. When Borland Delphi was used with the SQL
Anywhere OLE DB provider, this would have resulted in the application crashing.
The following code fragment illustrates the situation.
ADOQuery1.SQL[0] := 'select * from Customers where GivenName = :fname';
ADOQuery1.Parameters.ParamByName('fname').Value := '';
ADOQuery1.ExecSQL;
This problem has been fixed. A null pointer passed to the provider for a
parameter will now be treated as an empty string when passed to the database
server.
================(Build #3547 - Engineering Case #476516)================
When using the SQL Anywhere OLE DB provider, preparing a statement that called
a user-defined function would have resulted in the function's result value
being truncated to 0 bytes. This problem has been corrected.
The following is an example:
CREATE FUNCTION foo( IN arg VARCHAR(3) )
RETURNS VARCHAR(10)
DETERMINISTIC
BEGIN
DECLARE retVal VARCHAR(10);
SET retVal = 'RETURNVALU';
RETURN retVal;
END
Dim cmd As New ADODB.Command
cmd.CommandText = "foo"
cmd.CommandType = ADODB.CommandTypeEnum.adCmdStoredProc
cmd.let_ActiveConnection(con)
cmd.Prepared = True
cmd.Parameters.Refresh()
cmd.Parameters(1).Value = "abc"
cmd.Execute()
MsgBox(cmd.Parameters(0).Value)
A work-around is to not prepare the command (i.e., cmd.Prepared = False).
================(Build #3537 - Engineering Case #474282)================
When a DataGrid object was filled with columns that contained more than 32K
bytes, an error could have been returned by the SQL Anywhere OLE DB provider.
This problem has been fixed.
================(Build #3527 - Engineering Case #474606)================
When using the OLE DB provider, a crash could have occurred when disconnecting
from a datasource. This problem has been fixed.
================(Build #3490 - Engineering Case #467507)================
When using SQL Server Business Intelligence Development Studio, the Data
Flow Task "Preview" and column list functions would have failed
when using the SQL Anywhere OLEDB Provider to connect to a SQL Anywhere server.
This problem has been fixed.
================(Build #3486 - Engineering Case #467276)================
The SQL Anywhere OLE DB provider may have failed an assertion if called by
ADO with an incorrect set of parameters to Rowset::ReleaseRows. This problem
only exists in the 64-bit version of MSADO15.DLL for Windows Vista. It does
not exist in the 32-bit version of Vista, nor does it exist in the 64-bit
version of Windows 2003. It occurred when ADO called the SQL Anywhere OLE
DB provider to release rows in a rowset from which it had not previously
fetched rows. The symptoms included a request to release a rowset with a
single row and a pointer to an invalid row handle.
A work around has been added to the SQL Anywhere OLE DB provider such that
a request to release a rowset when no rowset exists will be ignored. The
following VBScript sample will fail on 64-bit Windows Vista without the provider
workaround:
query = "SELECT * FROM Employees"
Set recordset = connection.Execute(query)
For Each field in recordset.Fields
WScript.Echo field.Name
propCount = 0
For Each prop in field.Properties 'crashes on 64-bit Vista
...
Next
Next
================(Build #4206 - Engineering Case #665004)================
When trying to connect using the Ruby DBI interface, the driver did not raise
an error if the username/password was invalid. Instead it silently failed.
This has been fixed.
================(Build #4181 - Engineering Case #657748)================
When using the Microsoft ODBC Data Source Administrator, an attempt to create
a DSN that used a FIPS Certificate may have resulted in a crash. This has
been fixed.
================(Build #4146 - Engineering Case #647854)================
When running the fetchtst tool for testing the performance of queries (in
samples-dir\SQLAnywhere\PerformanceFetch) on a SQL statement larger than
10K, fetchtst may have crashed. This has now been fixed.
================(Build #4129 - Engineering Case #641702)================
The MSI install built using the Deployment wizard did not include the Charsets
directory. This is used by the Unload Support feature, and has now been added.
================(Build #4128 - Engineering Case #642996)================
If a database encrypted with AES_FIPS or AES256_FIPS was copied to a CE device,
the server would have been unable to start it. This has been fixed.
================(Build #4121 - Engineering Case #640110)================
When executing a remote procedure call to an ASE server, if the procedure
involved output parameters, then there was a chance the call would have failed
with the remote error "output parameters will only be returned from
this stored procedure when you use only parameter markers to pass parameter
values". This problem has now been fixed and the remote call should
now execute correctly.
================(Build #4080 - Engineering Case #630338)================
If the option row_counts was set to 'On', the system procedures sa_performance_statistics
and sa_performance_diagnostics did not return a result set, and the procedure
sa_describe_query caused an assertion failed 109512 error. These problems
have been fixed.
================(Build #4065 - Engineering Case #627788)================
Selecting the database property "name" for viewing would have prevented
the list of database properties from refreshing. This has been fixed.
================(Build #4057 - Engineering Case #624971)================
The database server could have crashed when recovering a database with multiple
dbspaces. This has been fixed.
================(Build #4030 - Engineering Case #615473)================
When installing the Japanese 10.0.1 3990 EBF, some of the text in various
buttons of the installer contained an unintelligible sequence of characters
(mojibake). The character encoding used when compiling the installer was
incorrect; this has been fixed.
Note, the installer for the 10.0.1 3976 EBF worked correctly.
================(Build #4029 - Engineering Case #616656)================
When using the SQL Anywhere PHP module, and binding a null numeric value
to a statement with sasql_stmt_bind_param_ex, the null value would have been
converted to a 0. This resulted in a 0 being passed in the statement instead
of the desired null value. Resetting the variable to null after binding would
have given the desired behavior. This has now been fixed.
================(Build #4026 - Engineering Case #618014)================
The TableViewer ADO.Net sample application queries SYSTABLE to determine
the table list to display. That query would have incorrectly returned text
index tables, if defined, which cannot be directly manipulated. If a query
was subsequently issued using a text index table, the error SQLE_TEXT_CANNOT_USE_TEXT_INDEX
would have occurred. The query has now been rewritten to only display base
tables.
Additional changes to the application were also made to improve usability.
First, functionality that requires an active database connection, such as
the Execute button, is disabled if there is no active connection. Second,
a simple SELECT statement is generated based on the table selected from the
table list control.
================(Build #4013 - Engineering Case #610982)================
When installing SQL Anywhere on a Windows system that used a multibyte character
set as the ANSI code page, the SQL Anywhere performance monitor counters
may not have been registered correctly and no error message would have been
displayed. At startup, the server would have displayed the message "Unable
to initialize NT performance monitor data area; server startup continuing".
This problem has now been fixed.
================(Build #4010 - Engineering Case #608743)================
When installing an MSI created by the Deployment wizard which contained Sybase
Central and one or more plugins, on Windows Vista or Windows 7, it would
have failed with the error: "The exception unknown software exception
(0xc000000d) occurred in the application at location 0x10001d3d." The
install would have completed, but the plugins were not correctly registered.
This has been fixed.
================(Build #3976 - Engineering Case #595294)================
The install or uninstall process could have left the machine.config file
in a bad state. This has been fixed.
================(Build #3911 - Engineering Case #577714)================
The ADO.Net sample program LinqSample was not working correctly. This has
now been fixed.
================(Build #3910 - Engineering Case #577334)================
When using the SQL Anywhere C API (drivers for PHP, Perl, Python, and Ruby),
applications executing RAISERROR with a user-defined error, would have seen
the correct error code returned, but the error message was returned as "Unknown
error". This has been fixed.
================(Build #3893 - Engineering Case #564857)================
In very rare circumstances, the SQL Anywhere .NET provider could have crashed
the worker process in IIS. This problem has been fixed.
================(Build #3846 - Engineering Case #560351)================
Columns of type UNIQUEIDENTIFIER were being fetched in binary format by the
SQL Anywhere C API (used by the PHP, Perl, Python, and Ruby drivers), whereas
they should have been returned as strings. This has been fixed.
================(Build #3846 - Engineering Case #559864)================
Connecting, disconnecting, and reconnecting using the same connection handle
would have caused error -298 "Attempted two active database requests"
in dbcapi.
For example:
conn = sqlany_new_connection();
sqlany_connect( conn, <connection_string> );
sqlany_disconnect( conn );
sqlany_connect( conn, <connection_string> );
The last sqlany_connect() call would have returned error -298. This has
been fixed. The workaround is to call sqlany_free_connect() and allocate
a new handle.
================(Build #3835 - Engineering Case #556340)================
Applications fetching data from an NCHAR column that was greater than 32767
bytes, using the Perl, Python, PHP, or Ruby drivers, may have crashed. This
has been fixed.
================(Build #3817 - Engineering Case #553491)================
The Apache redirector module would have crashed when used with the Sun-built
Apache web server that currently ships with the Solaris operating system.
This has been fixed.
================(Build #3783 - Engineering Case #547084)================
A connection attempt that resulted in a warning was treated as an error and
no connection was created. This was affecting the PHP, Python, and Ruby drivers.
This has been fixed. Warnings no longer prevent a successful connection.
The actual warning message can still be retrieved as usual.
================(Build #3756 - Engineering Case #542521)================
On Unix systems, when an application crashed, rather than aborting or exiting,
it may have gone into a state of 100% CPU utilization. This would have occurred
when the following conditions occurred in order:
1. The application loaded one of SQL Anywhere's client libraries (JDBC driver,
ODBC driver, DBLIB), which automatically install a signal handler.
2. The application installed its own signal handler function.
3. An application fault happened which caused the application's signal handler
to call the SA signal handler.
The SA signal handler would return without causing an abort, and the application
fault would have been re-triggered. The re-triggering of the signal and the
return without handling it generated 100% CPU utilization. This has been fixed.
================(Build #3756 - Engineering Case #542482)================
On Mac OS X systems, if the path specified in the "Database" field
of the "New Server" dialog in DBLauncher contained spaces, the
server would have failed to start the database. This has been fixed.
================(Build #3752 - Engineering Case #541576)================
When RSA encryption was in use by the server or client on Mac OS X systems,
memory could have been leaked. This has been corrected.
================(Build #3741 - Engineering Case #540201)================
On Mac OS X systems, an application may have taken a very long time to connect
to a server if the server was found through an LDAP server and both the client
and the server are IPv6-enabled. On Mac OS X, in order to establish a connection
to a link-local IPv6
address, the scope (interface) ID must be specified. If a scope ID is not
specified, it defaults to 0. A connection attempt where the scope ID is incorrect
may take a long time to time out and fail. When a link-local IPv6 address
was registered with LDAP, the scope ID was not included in the IP address
that was registered. Doing so would not be
useful, since scope IDs for the same link can vary from machine to machine.
If an application obtained such a link-local address from LDAP and attempted
to connect to it, it would in effect attempt a connection with a scope ID
of 0. This has been fixed so that a connection attempt with a scope ID of 0
is now refused on Mac OS X systems, including when the HOST connection parameter
is used in the connection string with a link-local IPv6 address with no interface
ID specified.
Note that if the wrong non-zero interface ID is specified, a connection
will still be attempted.
================(Build #3724 - Engineering Case #536568)================
Using the SQL Anywhere Support utility (dbsupport) to check for updates on
HP-UX and AIX would have failed to check for 32-bit client updates. This
has been fixed.
================(Build #3707 - Engineering Case #529075)================
When trying to use the database tools library on HP-UX (PARISC 32 or 64)
to read a database created with a version prior to 10.0, the application
would have failed with the error "could not find or load the physical store
DLL". This has been fixed.
================(Build #3698 - Engineering Case #531094)================
The installer may have failed on some versions of Linux. Symptoms may have
included incorrect reporting of available disk space, and messages such as:
tail: `-1' option is obsolete; use `-n 1'
Try `tail --help' for more information.
An error occurred while attempting to extract files in /opt/sybase/
The way in which the head and tail utilities were being invoked by the installer
was incompatible with some older versions of the Gnu head and tail utilities
included on Linux. This has been fixed by replacing the use of head and tail
in the setup script with appropriate sed commands.
================(Build #3658 - Engineering Case #494280)================
After installing an application from a Microsoft Windows Installer package
that was created using the Deployment Wizard, if the application was created
with Visual Studio 2005 C# and used System.Data.OleDb, updating the database
would have generated an error for which there was no text. This has been
corrected.
================(Build #3656 - Engineering Case #494029)================
The -uc option (start the server in console mode) was not supported by the
server when run on Mac OS X systems. This has now been corrected.
================(Build #3654 - Engineering Case #492387)================
The install would have failed on Unix systems which contained a version of
coreutils 6.9 or newer (such as Ubuntu Linux 8.04). The failures would likely
have occurred while checking that the target system meets the minimum requirements
for SQL Anywhere, or while verifying the amount of free disk space available.
The "setup" install script can be modified to work around this
issue as follows:
1. Find all lines containing the "cut" command that uses the "-f"
argument
2. For all such lines if a comma immediately follows the "-f",
remove this comma
For example, the line:
OS_REL_1=`echo $OS_REL | cut -d. -f,1`
should become:
OS_REL_1=`echo $OS_REL | cut -d. -f1`
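The documented edit can also be applied programmatically. The following Python sketch (the helper name fix_cut_flags is illustrative, not part of SQL Anywhere) removes a comma that immediately follows the "-f" argument in lines that invoke cut:

```python
import re

def fix_cut_flags(line: str) -> str:
    # Apply the documented workaround: in lines that invoke cut with -f,
    # drop a comma that immediately follows the -f flag.
    if "cut" in line:
        return re.sub(r"-f,", "-f", line)
    return line

# The example line from the workaround above:
print(fix_cut_flags("OS_REL_1=`echo $OS_REL | cut -d. -f,1`"))
# prints: OS_REL_1=`echo $OS_REL | cut -d. -f1`
```

Running this over every line of the "setup" script reproduces the manual edit described in steps 1 and 2.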
================(Build #3652 - Engineering Case #492018)================
If a service that could not interact with the desktop failed to start, the
error message describing the cause of the failure would not have been logged
to the Event Log. This has been fixed.
================(Build #3634 - Engineering Case #490227)================
The SQL Preprocessor (sqlpp) could have generated incorrect code for SET
OPTION statements. Correct code was generated for single SET OPTION statements,
but incorrect code was generated if the SET OPTION was contained within a
batch, procedure definition, etc. This has been fixed.
================(Build #3634 - Engineering Case #482138)================
The Deployment Wizard would have created an install that did not register
the Dbmlsync Integration Component. This has been corrected by having dbmlsynccom.dll
and dbmlsynccomg.dll self register when installed.
================(Build #3616 - Engineering Case #487364)================
When converting a string from one character set to another, it was possible
for the translated string to have been truncated in very rare situations.
For the problem to have occurred, a conversion using ICU was required, which
typically meant that a multibyte charset other than UTF-8 was involved, which
is similar to Engineering case 484960. This problem has been fixed.
Note, this problem does not affect the database server, but does affect
other components in SQL Anywhere.
================(Build #3614 - Engineering Case #470999)================
It was not possible to build the PHP driver for SQL Anywhere 10 on Unix systems.
The config.m4 script that is used as part of the PHP build procedure has
now been updated to use version 10 software.
================(Build #3600 - Engineering Case #484960)================
When any of the components in SQL Anywhere were converting a string from
one character set to another, it was possible for the translated string to
have been truncated in rare situations. For the problem to have occurred,
a multibyte character set other than UTF-8 was typically involved. The problem
has now been fixed.
================(Build #3596 - Engineering Case #477617)================
After running the Windows CE install on a Windows Vista system, immediately
choosing to deploy to a Windows CE device would have caused the deployment
install to crash with the following error:
An error (-5011: 0x80040706) has occurred while running the setup.
Please make sure you have finished any previous setup and closed other
applications.
If the error still occurs, please contact your vendor: Sybase, Inc.
Spawning the CE deployment installer at the end of the CE Desktop install
was unstable on Windows Vista. The deployment installer is no longer automatically
launched when installing on Windows Vista; instead, a message is displayed
stating that deployment can be done by selecting "Deploy SQL Anywhere for
Windows Mobile" from the Start menu.
================(Build #3592 - Engineering Case #484072)================
Some UPDATE and DELETE statements with aggregate expressions were incorrectly
failing with the error "Invalid use of an aggregate function".
This has been corrected.
================(Build #3582 - Engineering Case #483072)================
The SQL Anywhere Deployment wizard would not have deployed the file mlnotif.jar
when MobiLink server was selected. This file was missing from the list of
files to deploy and has now been added.
================(Build #3581 - Engineering Case #482379)================
After applying an EBF for SQL Anywhere Windows systems with Visual Studio
2005 installed, there could have been some garbled characters left at the
beginning of the machine.config file for .NET Framework 2.0. This would have
caused the SQL Anywhere Explorer for Visual Studio 2005 to not work properly.
This has been fixed.
================(Build #3576 - Engineering Case #482137)================
The Deployment Wizard did not deploy the utility dbelevate10.exe, which is
required for running on Windows Vista. This has been corrected.
================(Build #3570 - Engineering Case #481415)================
Attempting to use the "-install" option in the Unix install would
have resulted in the failure:
Files missing or regkey invalid.
This has now been fixed.
================(Build #3530 - Engineering Case #474904)================
If two or more transactions concurrently attempted to lock a table t in exclusive
mode with "lock table t in exclusive mode", the transactions could
have deadlocked. This was much more likely to occur on multi-processor systems,
and is not likely to be reported by personal servers. This has been fixed.
================(Build #3521 - Engineering Case #473203)================
When using the iAnywhere Solutions 10 - Oracle ODBC driver to fetch data
by calling SQLGetData(), the data could have been truncated, if the column
was not a BLOB column, and the buffer size passed into SQLGetData was less
than the actual data length. This problem is now fixed.
================(Build #3514 - Engineering Case #479560)================
On RedHat Enterprise 5 systems with kernel versions prior to 2.6.21, if a
SQL Anywhere executable crashed, the crash handler would have written a crash
log, but it would have failed to write out a minicore dump. This issue has
been fixed.
================(Build #3513 - Engineering Case #472262)================
The "Start In" property of the Sybase Central shortcut (Start menu
item) was incorrectly set. This could have resulted in Sybase Central failing
to start. This has been fixed.
================(Build #3508 - Engineering Case #472928)================
As of 10.0.1 build 3341, it was possible to use the Data Source utility (dbdsn)
to create DSNs for the iAnywhere Oracle driver. However, it was possible to
install the iAnywhere Oracle driver without dbdsn being installed. For example,
if MobiLink was installed without SQL Anywhere, the iAnywhere Oracle driver
would have been installed without dbdsn. This has been fixed.
================(Build #3507 - Engineering Case #471584)================
Erroneous results could have been obtained for a query containing a left
outer join, where FOR READ ONLY was not specified, and where a temporary table
was required (for example, to order the results of the join). This has now
been corrected.
================(Build #3506 - Engineering Case #470680)================
When running the Windows Performance Monitor in a Terminal Services session
other than session 0, it was not possible to monitor database services running
in session 0. On XP, system services and the primary desktop are all in session
0. On Vista, only the system services run in session zero. The behaviour
has been changed so that when the Windows Performance Monitor (perfmon) is
started (actually, when "add counters" is selected for the first
time), perfmon will monitor a database server in the local session if one
exists and is providing statistics. If there is no server running in the
current session, perfmon will monitor a database server in session 0 if one
exists and is providing statistics. If there is no database server providing
statistics in the local session or in session 0, perfmon will display statistics
for a database server if one is subsequently started in the local session
ONLY. To monitor a database server that runs in session 0 (eg, a system service)
from a session other than session 0, the database server must be started
before perfmon is started.
================(Build #3503 - Engineering Case #470839)================
When running the Windows CE install in Maintenance mode, the Start Copy Dialog
did not contain the list of components to be installed. This has been fixed.
================(Build #3500 - Engineering Case #469977)================
If a server was started on Windows Vista and the default port number 2638
was in use, the server may have given the error "Unable to initialize communications
links" and fail to start. This would only have happened if a port number
was not specified using the -x server option. The correct behaviour is to
choose a different port and start on that port. This has been fixed.
================(Build #3499 - Engineering Case #470052)================
On Japanese Windows systems, when browsing for a file in SQL Anywhere Explorer,
UltraLite udb and pdb files would not have been displayed when the file type
was "All UltraLite Database Files (.udb, .pdb)", even though they
did exist. This has been fixed.
================(Build #3474 - Engineering Case #455330)================
The Unix install could have hung after reporting the following error:
"No valid values found for the -w flag". The installer makes a
call to the OS to get the size of the current window, which is then used
to format the display of the install text. If this call returns an error,
the window size is set to a default of 80x24. If, however, the call did not
return an error but returned a window size of 0x0, the installer would have
tried to use this to format the text, leading to problems. This has been fixed
by using the default 80x24 size for this case as well.
================(Build #3420 - Engineering Case #464299)================
Installing the Runtime version of SQL Anywhere, using the "Add"
option during SQL Anywhere Install Maintenance, would have uninstalled some
previously installed components. This has been fixed.
================(Build #4222 - Engineering Case #669032)================
When starting the 64-bit Linux server with the GTK GUI, if neither the server
name nor the database file was specified (e.g. when starting from the icon),
a dialog was presented to enter server startup information. When this dialog
was closed, the server may have crashed. This has now been fixed.
================(Build #4222 - Engineering Case #663056)================
In exceptionally rare situations, the server could have crashed or failed assertions
106808, 100913, or 111706 if very long property values were queried. This
has been fixed by truncating property values to the max varchar length of
32000 bytes.
================(Build #4218 - Engineering Case #659608)================
When making an external environment call, if the external environment procedure
made a server side request that ended up leaving a cursor on a temporary
table open, then the server could have crashed when the connection was closed.
This problem has now been fixed.
================(Build #4213 - Engineering Case #661440)================
In rare cases the server may have crashed while performing DDL and DML operations
concurrently. This has been fixed.
================(Build #4213 - Engineering Case #594916)================
In some circumstances, the server may have failed to recover a database with
assertion failure 201135 - "page freed twice". Some newly allocated
database pages were not being initialized. This has been fixed.
================(Build #4209 - Engineering Case #665799)================
On Windows systems, a minidump might not have been generated under certain
circumstances. This has been fixed.
================(Build #4209 - Engineering Case #635353)================
The server could have hung when a connection disconnected, or was dropped.
This was more likely to have occurred if the server was under heavy load.
This has been fixed.
================(Build #4188 - Engineering Case #659631)================
If an application enlisted a connection within a DTC transaction and then
subsequently attempted to perform a DTC commit on the transaction after explicitly
unenlisting it first, then the application would have hung until the server
was shut down. This problem has now been fixed and the commit request will
now immediately fail as expected.
================(Build #4185 - Engineering Case #658114)================
The server would have crashed if a SELECT statement used the FOR XML EXPLICIT
clause, and a null value was used for the CDATA directive. This has been
fixed.
================(Build #4185 - Engineering Case #657823)================
The fix for Engineering case 620136 did not handle the situation where there
was no declared Primary Key in the primary table but there were table, or
column, (not nullable) Unique Constraints that permit the addition of foreign
keys. This problem has been corrected.
================(Build #4183 - Engineering Case #658302)================
If many contiguous index entries were removed from an index with no intervening
inserts, concurrent snapshot transactions could have seen incorrect results,
and in rare circumstances, foreign rows could have been added without matching
primary rows. This has been fixed.
================(Build #4183 - Engineering Case #657987)================
The server could have crashed, or failed assertion 200114, when processing
a LIKE predicate. This has been fixed.
================(Build #4182 - Engineering Case #620095)================
Several stability problems existed with parallel queries when using low-memory
strategies, which could have led to server hangs or crashes. These have
been fixed. A workaround for these problems is to disable parallelism by
setting the option MAX_QUERY_TASKS=1 for all affected queries.
================(Build #4180 - Engineering Case #655956)================
If a server was handling a large number of requests, and a large number of
those requests made external environment calls that resulted in server side
calls coming back into the server and those server side calls were similar
in nature, then there was a chance that the server would have crashed when
one or more of the connections making external environment calls closed.
This problem has now been fixed.
================(Build #4176 - Engineering Case #655981)================
The value of the ApproximateCPUTime property is never expected to decrease
between calls, as it represents an estimate of accumulated CPU time for a
connection. However, for connections that had accumulated approximately
1000 seconds of CPU time, the counter could have periodically receded by
approximately 400 seconds. This has been fixed.
================(Build #4175 - Engineering Case #550725)================
A query with many predicates (original or inferred) was slower than some
earlier versions, due to many semantic transformations in the parse tree.
This has been fixed.
================(Build #4174 - Engineering Case #656272)================
The INSERT ON EXISTING SKIP statement did not report the correct number of
inserted and updated rows using @@rowcount and sqlcount. This has now been
corrected.
================(Build #4174 - Engineering Case #655749)================
Execution of an INSERT ... ON EXISTING SKIP statement did not report the
correct number of inserted and updated rows using @@rowcount and sqlcount.
This has been fixed.
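By way of analogy only, the corrected behaviour can be illustrated with SQLite from Python: here SQLite's INSERT OR IGNORE stands in for SQL Anywhere's INSERT ... ON EXISTING SKIP, and Python's sqlite3 rowcount stands in for @@rowcount (this is not SQL Anywhere syntax). A skipped row should not be counted as inserted:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")
con.execute("INSERT INTO t VALUES (1)")

skipped = con.execute("INSERT OR IGNORE INTO t VALUES (1)")   # key exists: skipped
inserted = con.execute("INSERT OR IGNORE INTO t VALUES (2)")  # new key: inserted

# Only the genuinely inserted row contributes to the reported row count.
print(skipped.rowcount, inserted.rowcount)
```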
================(Build #4173 - Engineering Case #655972)================
The fix for Engineering case 636018 missed a case, which has now been corrected.
Description of case 636018:
Queries involving indexes containing long values could have returned incorrect
results. Index corruption was possible, but not likely.
================(Build #4173 - Engineering Case #654938)================
In rare cases, a corrupt TCP packet could have caused the server to crash.
The server now validates the packet header before doing anything with the packet.
If it is corrupt, the packet is dropped.
================(Build #4170 - Engineering Case #654790)================
In very rare cases, the server may have crashed with a floating point exception
when slightly loaded. This has been fixed.
================(Build #4168 - Engineering Case #654284)================
The server could have crashed if the STOP SERVER or STOP ENGINE statement
was called from an event or HTTP connection. This has been fixed.
Note that the 'STOP SERVER' syntax is new to version 12 (older servers support
'STOP ENGINE').
================(Build #4168 - Engineering Case #654259)================
The changes for Engineering case 650489 may have caused execution of remote
procedure calls to an ASE remote server to fail with a strange "unchained
transaction mode" error. This problem has now been fixed.
================(Build #4167 - Engineering Case #653591)================
Attempting to attach tracing to an older version database file could have
caused the server to crash. This has been fixed so that attempting to attach
tracing to an older version file now returns the error "ATTACH TRACING
could not connect to the tracing database" (-1097).
================(Build #4167 - Engineering Case #653590)================
Diagnostic tracing, or application profiling to LOCAL DATABASE could not
be used when the server was started with the command line option -sb 0 (disable
broadcast listener). This has been corrected. A workaround is to manually
supply a connection string (ATTACH TRACING TO <connstr>) with the DoBroadcast=NO
option, rather than using the LOCAL DATABASE clause.
================(Build #4167 - Engineering Case #556778)================
The return values of the built-in functions user_id() and suser_id() may
have incorrectly been described as not nullable even if the function argument
was not nullable. This may have led to the assertion error 106901 "Expression
value unexpectedly NULL in write". This has been fixed so that the functions'
results are always described as nullable.
================(Build #4166 - Engineering Case #653588)================
If tracing was suddenly detached (because, for example, the server receiving
the tracing data was shut down) at the same time as a deadlock occurred,
a deadlock victim may have failed to write a ROLLBACK to the transaction
log. This may have led to an incorrect partial commit of a deadlocked transaction.
This has been fixed. This problem is expected to be very rare.
================(Build #4166 - Engineering Case #652911)================
If an INSTALL JAVA UPDATE statement was executed to update an existing java
class, the server would have incorrectly added a new system object id rather
than reuse the already assigned object id. This problem has now been fixed.
================(Build #4166 - Engineering Case #635956)================
A query with a CUBE, ROLLUP, or GROUPING SETS clause and HAVING predicates
may have returned an incorrect result set. The query must not have had any
aggregate functions, and the grouping sets must have contained the grand
total which should have been filtered by the HAVING predicates, but instead
it was returned as a valid row.
For example:
select n_comment from nation group by cube (n_comment) HAVING n_comment
like 'alw%';
The result set would have contained all the rows with n_comment for which
the predicate "n_comment LIKE 'alw%'" is TRUE, but also the row "(NULL)".
This has now been fixed.
================(Build #4165 - Engineering Case #653052)================
Using the system procedure xp_sendmail with an attachment larger than about
55 kB may have resulted in the attachment being corrupted. This has been
fixed.
================(Build #4164 - Engineering Case #652791)================
If a statement for a directory access table failed with the error SQLSTATE_OMNI_REMOTE_ERROR,
and this statement was the last statement of the transaction, then all subsequent
remote server statements of this connection would have failed with the same
error. This has been fixed.
================(Build #4163 - Engineering Case #652543)================
The server may have crashed during inserts into a view, if the view column
was not a base table column. This has been fixed. Now the correct error SQLSTATE_NON_UPDATEABLE_VIEW
is returned.
================(Build #4162 - Engineering Case #652411)================
If an error occurred accessing the tape drive when beginning a tape backup,
the BACKUP statement may have hung. This has been fixed.
================(Build #4162 - Engineering Case #652253)================
In some rare cases, when run on HP, AIX, and Solaris systems the server may
have crashed on shutdown. This has been fixed.
================(Build #4162 - Engineering Case #652107)================
If a foreign key had both ON UPDATE and ON DELETE actions, renaming a column
referenced by the foreign key could have caused one of the system triggers
to be deleted and the other to be left unchanged. A trigger for an ON UPDATE
action could have been converted to an ON DELETE action. This has been fixed.
================(Build #4162 - Engineering Case #651694)================
If the connections between servers in a mirroring system used encryption,
the primary server could have hung when performing an operation which required
exclusive access to the database (e.g. a checkpoint) if other update activity
was also occurring. This has been fixed.
================(Build #4162 - Engineering Case #639107)================
In a mirroring system it was possible for the mirror server to get the error:
"*** ERROR *** Assertion failed: 100904 (10.0.1.4075) Failed to redo
a database operation (page number and offset) - Error: Table in use".
This could also have occurred when dropping a global temporary table, or
during database recovery, without using a high availability environment.
This has been fixed.
================(Build #4160 - Engineering Case #651729)================
In some rare cases, the server may have hung if diagnostic tracing had been
enabled. This has been fixed.
================(Build #4159 - Engineering Case #650740)================
Execution of a DROP DATABASE statement would have failed if the automatically
generated database alias name was an invalid identifier. This has been fixed.
================(Build #4158 - Engineering Case #651029)================
On Linux builds where the kernel was compiled to support something other
than 1024 processors, the database server could have failed to detect the
correct processor geometry and could have crashed. This problem has been
fixed. Note that recent Linux kernels have been built with support for up
to 4096 processors.
================(Build #4158 - Engineering Case #650489)================
If an application made a remote procedure call that made changes to a remote
database and then subsequently called ROLLBACK, the changes would not have
been rolled back on the remote database that the remote procedure call affected,
but would have been rolled back locally and on the other remote databases
that the local connection modified. This problem has now been fixed.
================(Build #4157 - Engineering Case #650829)================
The Validate utility (dbvalid), or the VALIDATE statement, could have reported
spurious orphaned blobs if there were indexes containing long values. This has been
fixed.
================(Build #4156 - Engineering Case #649797)================
An index containing long values could have become corrupted if the table
was subsequently altered to add columns, remove columns that did not appear
in the index, or change the nullability of a column not appearing in the
index. Also, for this to have happened, entries must have been deleted from
the index. This has been fixed.
================(Build #4156 - Engineering Case #647154)================
The denial-of-service attack addressed by the changes for Engineering case
610115 could still have occurred if idle timeout had been turned off on the
server using the command line option -ti 0, or the system procedure sa_server_option('IdleTimeout',0).
If the idle_timeout value was not 0, the server was not susceptible. This
has now been corrected.
================(Build #4156 - Engineering Case #646431)================
In timing dependent cases, the server could have hung with 100% CPU usage.
This has been fixed.
================(Build #4154 - Engineering Case #649928)================
Sending large attachment files via SMTP using the system procedure xp_sendmail()
may have crashed the server. This problem was introduced by the changes made
for Engineering case 643590, and has now been fixed.
================(Build #4151 - Engineering Case #649795)================
Queries over indexes could have returned incorrect results if long index
entries (greater than ~240 bytes) appeared in the index, with index corruption
a possibility. This has been fixed.
================(Build #4150 - Engineering Case #648518)================
In very rare cases, the server may have crashed if the cache was low on memory
and a SELECT statement contained a very large IN list predicate. This has
been fixed. The server will now return the error SQLSTATE_SYNTACTIC_LIMIT.
================(Build #4150 - Engineering Case #648179)================
The server could have entered a state where it would consume 100% of a single
CPU (ie. one 'core') and never leave that state. The problem was caused by
a race condition when more than one thread simultaneously attempted to reference
a foreign key index for the very first time; however, the effects of the
race condition may not be observed until the server attempts to shut down.
This problem has been fixed.
================(Build #4149 - Engineering Case #648493)================
If per-seat licensing was used, the error "Database server connection
limit exceeded" may have been reported when it should not have. In order
for this to have occurred, in addition to per-seat licensing, the -gm server
option, or HTTP connections to disabled databases, must also have been used.
When this problem occurred, the error was correct the first time it was reported,
but after disconnecting connections, the error may have continued when it
should not have. This has now been fixed.
================(Build #4149 - Engineering Case #647682)================
Key constraint checking and validation errors were possible when indexing
long index values if the relative position of the corresponding index columns
(foreign and primary) within their respective tables were not identical.
This has been fixed.
================(Build #4148 - Engineering Case #640821)================
It was possible to get the following validation errors:
Page x of database file "<database file name>" references
a table (y) that doesn't exist
or
Orphaned page (x) found in database file "<database file name>".
The database server could have left some pages in a state where they could not
be reused. The database would have continued to function normally in this
state, but it is possible to regain the lost pages by rebuilding the database
file. This most likely would have occurred in a non-system dbspace, and has
now been fixed.
================(Build #4146 - Engineering Case #647663)================
A server running as the primary in a mirroring system could have hung when
the mirror server was started. This was more likely to occur after the fix
for Engineering case 637057 was applied. This has been fixed.
================(Build #4145 - Engineering Case #649475)================
If a JDBC application connected via jConnect called the method DatabaseMetaData.getSchemas(),
then the server would have failed the request with the error "the 'FileVersion'
property is no longer supported". This problem has now been fixed and
the proper list of userids is now returned to the application.
================(Build #4145 - Engineering Case #645801)================
In an environment where each server in a mirroring system had two network
connections, each on one of three separate networks (so that a failure in
one network would still allow two of the nodes to communicate), a network
outage could have resulted in both partner servers acting as the primary
server. This has been fixed.
================(Build #4144 - Engineering Case #647495)================
On recent versions of Linux with SELinux enabled, programs with executable
stacks are forbidden. An affected program would have failed to start with an
error like:
dbeng12: error while loading shared libraries: libdbserv12_r.so: cannot
enable executable stack as shared object requires: Permission denied
This would have potentially happened with any SQL Anywhere binary, and has
now been fixed.
A work around is to either disable SELinux, or run execstack -c on the problematic
binaries.
================(Build #4144 - Engineering Case #647331)================
Execution of an extremely complicated remote query that
needed to be processed in either no passthrough or partial
passthrough mode, could have resulted in a server failure.
The server now properly returns error -890.
================(Build #4144 - Engineering Case #646703)================
In rare circumstances, reading a substring of a value from a compressed column
(not starting at the first byte) could have caused assertion failure 201501
- "Page ... for requested record not a table page". Note that the
Interactive SQL utility (dbisql) fetches long values in pieces, so selecting
the value using dbisql (without using substrings) may cause this problem.
This only happens on compressed columns with blob indexes. This has been
fixed.
================(Build #4144 - Engineering Case #643642)================
Revoking all table permissions from a grantee that were granted by a particular
grantor did not always remove the corresponding SYSTABLEPERM row. This has
now been fixed.
================(Build #4143 - Engineering Case #647187)================
When attempting to insert a string into a proxy table that was the result
of calling a builtin function, if the builtin function returned an empty
string, then there was a chance that the Remote Data Access layer would have
inserted a NULL value instead. For example, a statement like:
INSERT INTO my_proxy_table(my_column) SELECT RTRIM( ' ' )
may have inserted NULL instead of '' into my_proxy_table. This problem
has now been fixed.
================(Build #4143 - Engineering Case #645664)================
Attempting to unload and reload a database that contained a proxy table with
a unique constraint would have failed with the error: "feature 'alter
remote table' not implemented". This problem has now been fixed. The
"alter table...add unique" statement is no longer unloaded for
proxy tables.
================(Build #4143 - Engineering Case #643763)================
Execution of a query block that output a string constant could have caused
the server to crash if the optimizer chose a parallel execution plan. The likelihood
of such a crash increased under high server load and when the query occurred
inside a stored procedure. This problem has now been fixed.
For version 12.0.0, this problem was most likely to be encountered when
using string literals in different blocks of a Union, as follows:
SELECT 'String1', col1, col2 FROM table1 WHERE predicate1
UNION
SELECT 'String2', col1, col2 FROM table2 WHERE predicate2
For versions prior to 12.0.0 this problem was much more obscure, and likely
required a constant string occurring both in a non-simple output expression
and in a WHERE clause predicate.
This has been fixed.
================(Build #4142 - Engineering Case #646830)================
In very rare cases, the server may have crashed using long identifiers in
SQL statements. This has been fixed.
================(Build #4141 - Engineering Case #646687)================
If a large number of concurrent connections simultaneously executed remote
queries that required partial or no passthrough processing, and several of
the queries made heavy usage of aliases, then the server could have crashed.
This problem has now been fixed.
================(Build #4139 - Engineering Case #645926)================
If an Open Client or jConnect application attempted to prepare and execute
a statement with a large number of parameters, then the server would have
failed the request, or in rare cases, could have crashed. This problem has
now been fixed.
================(Build #4138 - Engineering Case #644508)================
A SQL Anywhere HTTP procedure may have failed when configured with a PROXY
clause to connect through an Apache forwarding proxy version 2.0.X. This
has been fixed. Changes have also been made to improve WebClientLogging (-zoc)
messages when connecting through a proxy.
================(Build #4137 - Engineering Case #645468)================
In rare situations, the value for Index Statistics reported in graphical
plans may have been incorrect. This has been fixed.
================(Build #4137 - Engineering Case #644526)================
If a long index entry (the equivalent of a 250 character or longer ASCII
string) was deleted from an index, there was the possibility of index corruption
and the server crashing. This has been fixed.
================(Build #4137 - Engineering Case #643936)================
Unexpected column names could have been reported for complex expressions
in the SELECT list of a statement. The problem mostly affected queries over
views, for which the name of the base table column, rather than the name
of the view column, could have been reported.
For example, consider the following table and view:
CREATE TABLE admin_group.employee(
pk INTEGER PRIMARY KEY,
fname CHAR(100) NOT NULL,
lname CHAR(100) NOT NULL,
cname CHAR(100) );
CREATE VIEW admin_group.v AS
SELECT e.fname AS first_name, e.lname AS last_name,
e.cname AS company_name
FROM admin_group.employee e;
In the query:
SELECT <expr> FROM admin_group.v;
the following expressions would have been described with the base table
column names:
CAST( first_name AS VARCHAR(100))
(first_name)
This has been fixed so that both of the expressions above will now be described
as 'first_name'.
Additionally, expressions such as ISNULL( <col1>, <col2> ) could
have been described differently depending on the nullability of the first
column. For example, ISNULL( first_name, company_name ) would have been described
as 'fname', whereas ISNULL( company_name, first_name ) would have been described
as 'isnull( employee.fname as first_name,employee.cname as company_name)'.
For consistency, both of the above expressions will now be described by unparsing
the expression.
================(Build #4136 - Engineering Case #595494)================
When running on Windows Vista or later, if the server encountered a fatal
error it was possible to see a Windows crash dialog as well as a "Send
Error Report" dialog. This has been fixed.
================(Build #4135 - Engineering Case #644491)================
A bypass query that contained an invalid cursor range could have caused the
server to crash in certain conditions. This has been fixed.
Note: please see the section "Query processing phases" for a definition
of bypass queries.
================(Build #4132 - Engineering Case #643456)================
If ALTER TABLE was used to reduce the length of a string column to less than
the value of the INLINE or PREFIX values for that column, and then the database
was unloaded, the reload script would have contained CREATE TABLE statements
that would be rejected by the server.
This has been fixed so that the ALTER TABLE statements will now fail.
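A minimal sketch of the sequence that previously produced an unloadable schema
(the column sizes and the INLINE/PREFIX values here are hypothetical, chosen
only for illustration):
```sql
-- Column declared with explicit INLINE and PREFIX storage values.
CREATE TABLE t1 ( c1 VARCHAR(200) INLINE 100 PREFIX 30 );

-- Shrinking c1 below its INLINE/PREFIX values now fails immediately,
-- instead of succeeding and leaving CREATE TABLE statements in the
-- reload script that the server would reject.
ALTER TABLE t1 ALTER c1 VARCHAR(50);
```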
================(Build #4131 - Engineering Case #643802)================
A web procedure that references another computer by name may have failed
to connect if both machines supported IPv6, but the web server on the remote
computer was not listening on any IPv6 addresses. This has been fixed.
================(Build #4131 - Engineering Case #643590)================
When using the include_file parameter of the external system procedure xp_sendmail,
it may have failed depending on the length of the file.
This has been fixed.
================(Build #4130 - Engineering Case #643596)================
When executing a query that involved window functions, proxy tables and dotted
references, if the query was invalid due to a missing GROUP BY reference,
then there was a chance the server would have failed to return the error.
In some cases, the server would even have crashed. This problem has now been
fixed.
Note that this fix is a follow-up to Engineering case 641477.
================(Build #4130 - Engineering Case #643587)================
The server may have hung while processing data for encrypted connections.
This has been fixed.
================(Build #4130 - Engineering Case #634181)================
The amount of data in CHAR, NCHAR or BINARY variables could have exceeded
the declared length of the variable when data was appended using the string
concatenation operator (||). This has been fixed.
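A hypothetical sketch of the symptom (the variable name and values are
illustrative only):
```sql
-- v is declared to hold at most 5 characters.
CREATE VARIABLE v CHAR(5);
SET v = 'abcde';

-- Before the fix, appending with || could leave v holding 8 characters,
-- exceeding its declared length; the value is now constrained to CHAR(5).
SET v = v || 'fgh';
SELECT LENGTH( v );
```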
================(Build #4129 - Engineering Case #643355)================
Setting the value of an OUT parameter in an external stored procedure would
have persisted, even without calling the set_value() function. This has been
fixed.
================(Build #4129 - Engineering Case #643317)================
Canceling a call to xp_sendmail(), xp_startmail(), xp_stopmail(), xp_startsmtp(),
or xp_stopsmtp(), may have caused a server crash. The external stored procedures
that manage SMTP mail state did not protect against the case where two threads
could try to access the same SMTP state. This has been fixed.
================(Build #4129 - Engineering Case #643314)================
Canceling an external stored procedure may have caused the server to crash.
This has been fixed.
================(Build #4129 - Engineering Case #643286)================
A mirror server could have crashed if multiple errors occurred on startup.
A mirror server uses -xp, and the crash could have occurred if the database
failed to start and the TCP/IP protocol failed to start. This has been fixed.
================(Build #4128 - Engineering Case #642524)================
The server could have become unresponsive when processing index scans in
which a residual predicate continually rejected candidate rows for the duration
of the scan. This has been fixed.
================(Build #4126 - Engineering Case #641360)================
The server may have returned an incorrect result set for a query that contained
a GROUP BY clause with distinct arguments, and the GROUP BY was executed
using the low memory strategy. This has been fixed.
================(Build #4122 - Engineering Case #641487)================
If a server was started with -o <file name>, then stopped and immediately
started again with the same -o <file name>, the server could have failed
to start with the errors "Invalid database server command line"
or "Can't open Message window log file: <file name>". This
failure was rare and timing dependent, and has now been fixed so the second
server will successfully start.
================(Build #4122 - Engineering Case #641095)================
The changes for Engineering case 635618, could have caused an INSERT statement,
using the CONVERT() function to convert a string to a time, to fail assertion
111704 - 'Attempting to store invalid time value in table {table name}, column
{column name}'. This problem did not occur if CAST was used in place of CONVERT,
and has now been fixed.
================(Build #4122 - Engineering Case #640901)================
Revoking table column permissions may have failed with the SQL error 'Permission
denied: you do not have permission to revoke permissions on "Column1"'
if there were column permissions granted from multiple grantors. This has
been fixed.
================(Build #4122 - Engineering Case #637897)================
On SUSE 10 systems, the server could have failed to start a database if the
database file was mounted on an NFS share. The error given would be something
like:
"Error: Database cannot be started -- /mnt/share/demo.db is not
a database"
This has now been fixed.
================(Build #4122 - Engineering Case #636801)================
Unloading a version 9 database with a table with named primary key constraint
could have failed if the primary key was referenced by an index hint in a
view. This has been fixed.
Also, unloading a version 10 or later database containing a table with a
primary key index that had been renamed would have failed to preserve the
new name for the index. This has been fixed.
================(Build #4119 - Engineering Case #640411)================
Statistics about the Disk Transfer Time (DTT) of additional dbspaces were
not loaded at database startup, so they were not available for the optimizer
to generate better plans. This has been fixed.
================(Build #4117 - Engineering Case #640621)================
Depending on timing, stopping a server with the Stop Server utility (dbstop)
and immediately restarting it with the Start Server in Background utility
(dbspawn) could have returned the error:
DBSPAWN ERROR: -85
Communication error
The communication error could also have occurred if the server was started
without dbspawn. This has been fixed.
================(Build #4117 - Engineering Case #640240)================
Execution of an ATTACH TRACING statement with a LIMIT clause, either by size
or by time, would generally have failed to limit the size of the trace captured.
This has been fixed.
================(Build #4115 - Engineering Case #639656)================
In some cases the Start Server in Background utility (dbspawn) could have
returned the generic error, -80 (Cannot start server), instead of returning
the real error. In other cases, the server could have crashed on shutdown.
This has been fixed.
================(Build #4113 - Engineering Case #639238)================
In very rare cases, doing full validation on a table may have caused the
server to crash. For this to have occurred, the following conditions had
to hold:
1) Validation was being done online.
2) The table contained blobs.
3) Table blobs were being heavily modified by other concurrent requests.
4) The right timing happened between the validation process and the blob
update process.
This has been fixed.
================(Build #4113 - Engineering Case #639159)================
In some cases, calling the system procedure sa_get_request_times() may have
caused the server to crash. This has now been fixed.
================(Build #4113 - Engineering Case #638207)================
A LOAD TABLE statement would have failed assertion 111706 "Attempting
to store invalid string value in table "{table name}", column "{column
name}" if the table had a column with user datatype uniqueidentifier.
The problem only happened if a user datatype was used. This has been fixed.
To fix such tables in existing databases, the table must be recreated,
or a database upgrade must be run.
================(Build #4111 - Engineering Case #639016)================
Attempting to execute queries that used the FOR XML clause, may have caused
the server to crash when failures were encountered while fetching data.
This has been fixed.
================(Build #4111 - Engineering Case #638835)================
Calls to the system function property('platform') would have returned 'WindowsVista'
when the server was running on Windows 7, Windows 2008, or Windows 2008
R2. This has been fixed.
================(Build #4111 - Engineering Case #638482)================
If diagnostic tracing was enabled on a database and a query used intra-query
parallelism, the server may have crashed. This has been fixed.
================(Build #4111 - Engineering Case #638477)================
In extremely rare circumstances, servers answering queries with keyset cursors
may have become unstable, leading to an eventual crash. This has been fixed.
================(Build #4111 - Engineering Case #637988)================
If an incorrect password was supplied in the saldap.ini file, the server
could have hung when attempting to register with LDAP. Also, SA client libraries
could have hung when using LDAP to find servers. This has been fixed.
================(Build #4111 - Engineering Case #637881)================
When executing a remote query that required partial or no passthrough processing,
and the query made heavy usage of aliases, then the server could have incorrectly
returned error "-890 statement size or complexity exceeds server limits".
This problem has now been fixed and the -890 error will now only be returned
if the statement size or complexity really does exceed server limits.
================(Build #4111 - Engineering Case #637874)================
When computing the VARIANCE, VAR_SAMP, VAR_POP, STDDEV, STDDEV_SAMP, or STDDEV_POP
functions, the server could have incorrectly returned a negative value or
NULL. This could have happened if the data was in a non-exact numeric column
(that is, of type DOUBLE or FLOAT) and there was extremely little actual
variance across the values. It most likely could only have happened when
all the values were exactly the same. This has now been fixed.
A workaround is to adjust the value over which the variance function is
computed so that a tiny amount of variance is introduced. For example, instead
of:
SELECT VARIANCE(mycolumn) FROM mytable
use:
SELECT VARIANCE(mycolumn + 0.00000001*myprimarykey) FROM mytable
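The underlying cause is floating-point cancellation: a one-pass variance
formula of the form sum(x*x)/n - avg(x)*avg(x) subtracts two nearly equal
quantities when the data is almost constant, and rounding can push the result
slightly below zero. A hypothetical reproduction (table and column names are
illustrative):
```sql
CREATE TABLE mytable ( myprimarykey INTEGER PRIMARY KEY, mycolumn DOUBLE );
INSERT INTO mytable VALUES ( 1, 123456789.123 );
INSERT INTO mytable VALUES ( 2, 123456789.123 );
INSERT INTO mytable VALUES ( 3, 123456789.123 );

-- All values identical: the true variance is 0, but before the fix this
-- could have returned a tiny negative number or NULL.
SELECT VARIANCE( mycolumn ) FROM mytable;
```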
================(Build #4111 - Engineering Case #637745)================
If an application executed a remote statement, and the remote statement required
the server to execute the statement in either partial or no passthrough mode,
then there was a chance the server would have crashed when the statement
was overly complex, or if the server cache was exhausted. This problem has
now been fixed by reporting an error in this situation.
================(Build #4111 - Engineering Case #637620)================
In rare circumstances, the server could have crashed while handling multiple
TLS connections. This has been fixed.
================(Build #4111 - Engineering Case #637340)================
If a Unix server was started with a server name longer than 32 bytes, shared
memory connections to it may have been dropped. This has been fixed.
================(Build #4111 - Engineering Case #637125)================
If an application executed a remote query that required the server to make
a remote connection to another SA database, then there was a very rare chance
that the server would have incorrectly failed the remote connection with
the error: "unable to connect, server definition is circular".
This problem has now been fixed.
================(Build #4111 - Engineering Case #636018)================
Queries involving indexes containing long values could have returned incorrect
results. Index corruption was possible, but not likely. This problem has
now been fixed.
================(Build #4111 - Engineering Case #634883)================
Connections which had communication compression enabled could have been dropped,
resulting in the "Connection was terminated" error. This was more
likely to occur if the connection had both communication compression and
simple encryption enabled. If the server -z log and the client LOGFILE log
were used, the message "Failed to decompress a compressed packet"
would have appeared in one or both of the logs when this problem occurred.
This has been fixed.
================(Build #4111 - Engineering Case #634728)================
If a simple statement had one of the following forms and a table hint was
used in the FROM clause, it was possible for subsequent statements from the
same connection with the same form, but with different hints, to use the
hints from the earlier statement.
1) SELECT {table columns} FROM {table} WHERE {primary key col1 = val1,
primary key col2 = val2, ... }
2) UPDATE {table} SET ... WHERE {primary key col1 = val1, primary key col2
= val2, ... }
3) DELETE FROM {table} WHERE {primary key col1 = val1, primary
key col2 = val2, ... }
This has been fixed. As a work-around, the statements can be changed to include
"OPTIONS( FORCE OPTIMIZATION )", or the server can be started with
the following command line switch: "-hW AllowSimpleUserCache".
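A sketch of the first work-around (the table and column names are hypothetical;
the OPTIONS( FORCE OPTIMIZATION ) clause is the one named above):
```sql
-- Without the work-around, this simple statement could reuse table hints
-- cached from an earlier statement of the same form on the same connection.
SELECT c1, c2 FROM t1 WHERE pk1 = 10 OPTIONS( FORCE OPTIMIZATION );
```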
================(Build #4107 - Engineering Case #631484)================
Execution of an ALTER TABLE statement, could have corrupted the table after
deleting and committing some rows from it. This has now been fixed.
================(Build #4106 - Engineering Case #637037)================
When a stored procedure was invoked through the Microsoft SQL Server Linked
Server mechanism using an "EXEC" statement and specifying parameters,
the call would have failed with a syntax error. The following is an example
of a SQL Server query that is forwarded to a SQL Anywhere server:
SELECT * FROM openquery(SALINK, 'exec test_proc 1')
This problem has been fixed. When parameters are present in the SQL query,
the statement is passed unchanged to the server. When no parameters are present,
the OLE DB provider rewrites the "exec" statement using CALL and
appends parameter marker place holders (in order to support ADO's ADODB.CommandTypeEnum.adCmdStoredProc).
================(Build #4106 - Engineering Case #636660)================
The SQL Anywhere web server required that the last boundary of a multipart/form-data
HTTP request be terminated with a carriage-return line-feed. This restriction
has now been relaxed; the server will now accept the last boundary as valid
even if it is not terminated with a CR/LF.
================(Build #4106 - Engineering Case #636572)================
If an application executed a remote query, and the query involved an IF or
CASE expression in the select list, then the query would always have been
processed in partial or no passthrough mode, even if there was only one remote
server involved. This restriction has now been relaxed such that remote queries
containing IF or CASE expressions in the select list will now be executed
in full passthrough whenever possible, but only if the remote server is another
SA server.
================(Build #4106 - Engineering Case #635803)================
Diagnostic tracing databases, or databases created by the automatic Application
Profiling Wizard, would have failed to start if the original database had
auditing enabled. This has been fixed.
A workaround is to temporarily disable auditing on the main database, create
the tracing database, and then re-enable it.
================(Build #4106 - Engineering Case #622184)================
All CALL statements had the same hash signature when captured by diagnostic
tracing, or the Application Profiling wizard. Now, the name of the procedure
is incorporated into the signature. This means that the Summary view of
captured statements will contain one entry for every procedure, rather than
a single entry for all procedures, which makes it easier to identify procedures
that need to be looked at for performance reasons.
================(Build #4105 - Engineering Case #636307)================
A simple UPDATE statement that affected a large number of rows could have
consumed memory proportional to the number of rows if the statement used
one of the following features:
- results from a user-defined function with numeric Expression Caching
(any data type)
- a LIKE predicate
- a CAST of a string to an approximate number (REAL, DOUBLE, or FLOAT)
- the SORTKEY or COMPARE builtin function
- the REMAINDER or MOD builtin functions with arguments of type NUMERIC
or DECIMAL
- the MEDIAN aggregate function
- a spatial data type
If the memory usage exceeded what was allowed for one connection, the statement
would have failed with a dynamic memory exhausted error. This has been fixed.
================(Build #4105 - Engineering Case #635618)================
When converting a string to a time using the CONVERT function and an explicit
format-style, SQL Anywhere 10.0 and above could have rejected conversions
permitted by earlier versions.
For example, the following statement is accepted by version 9.0, but rejected
by version 10.0 and above:
select convert( time, '11:45am', 14 ) tm_conv
The behaviour of converting from strings to TIME changed from version 9.0
of SQL Anywhere to version 10.0 and later, with version 10.0 and later applying
the same rules that conversions from string to timestamp used. The string
'11:45am' does not precisely match the format style 14 (hh:nn:ss:sss) because
it contains an "am" indicator that is not present in the style.
Parsing of formatted time strings has been enhanced so that the time portion
of a string is accepted provided that it matches the format [hh:nn:ss.ssssssAA].
The time string must specify the hour digits, but all other time parts are
optional. The AM/PM indicator is always accepted whether or not time parts
are omitted. Note that this now permits up to six digits to represent microseconds
after the seconds. This change affects the conversion of string to TIME and
also to TIMESTAMP, so there is a consistent parsing. The following is rejected
after this change, even though it was accepted in 9.0:
select convert( time, '1991-02-03 11:45', 101 )
The string does not match the style format 101 (mm/dd/yyyy).
Further, in some cases it was possible to generate invalid timestamps with
string conversions. This has also been fixed.
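The relaxed parsing rule can be illustrated as follows (these statements assume
the enhanced behaviour described above):
```sql
-- Accepted again: the AM/PM indicator is allowed even though
-- style 14 (hh:nn:ss:sss) does not include one.
SELECT CONVERT( time, '11:45am', 14 );

-- Also accepted: the hour digits are required, all other time parts are
-- optional, and up to six fractional-second digits are allowed.
SELECT CONVERT( time, '11:45:30.123456', 14 );

-- Still rejected: the string does not match style 101 (mm/dd/yyyy).
SELECT CONVERT( time, '1991-02-03 11:45', 101 );
```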
================(Build #4101 - Engineering Case #635120)================
In exceptionally rare conditions, the server may have crashed while reading
a row of a table that had a very large number of columns. This has now been
fixed.
================(Build #4100 - Engineering Case #633773)================
The method used for an internal database server timer on Linux to support
request timing (-zt option,) and row access times in the graphical plan with
statistics, was unreliable. This has been fixed.
================(Build #4099 - Engineering Case #634327)================
The server may have hung while running the Validation utility (dbvalid),
or the equivalent VALIDATE DATABASE statement. This was only possible if
multiple connections were open to the database, at least one of which was
doing DDL (such as an ALTER TABLE statement), and a checkpoint or connection
attempt was made during the validate. This has now been fixed.
Note, it is recommended that the database server not be servicing other
connections while database validation is taking place.
================(Build #4098 - Engineering Case #633753)================
If an application deleted a row from a table with a unique index, then subsequently
called an external environment procedure, and the external environment procedure
then re-added the row using the server-side connection, the application would
have received an assertion failure (200112) message on rollback. This problem
has now been fixed.
================(Build #4097 - Engineering Case #632353)================
If the server acting as the primary server in a mirroring system was shut
down at the same time as it lost quorum due to a dropped mirror connection,
the database on the primary could have been improperly checkpointed, resulting
in a failure to recover on the next startup. Also, if a mirror server was
starting at the same time the primary server was stopping or restarting,
the mirror server could have received log operations that were not written
on the primary. This would have resulted in an "incompatible files"
message the next time the mirror connected to the primary, and would have
forced the database and log to be manually recopied. Both of these problems
have now been fixed.
================(Build #4096 - Engineering Case #634330)================
Kerberos server principals needed to be of the form: server_name@REALM (for
example myserver@SYBASE.COM). There was no way to specify a Kerberos server
principal of the industry standard form: server_name/hostname@REALM (for
example myserver/mymachine.sybase.com@SYBASE.COM). Now the Kerberos server
principal can be specified with the server -kp option. The server principal
specified by -kp must have been extracted to the Kerberos keytab file on
the machine running the database server. Note that only one of -kp or -kr
can be specified.
-kp dbengX/dbsrvX server option:
Specifies the Kerberos server principal and enable Kerberos authenticated
connections to the database server.
Syntax:
-kp server-principal
Applies to:
all OSes except Windows Mobile
Remarks:
This option specifies the Kerberos server principal used by the database
server. Normally, the principal used by the database server for Kerberos
authentication is server-name@default-realm, where default-realm is the default
realm configured for the Kerberos client. Use this option if you want to
use a different server principal, such as the more standard format server-name/hostname@myrealm.
If OpenClient or jConnect Kerberos authenticated connections are made to
the server, the server principal must be specified by the application (see
SERVICE_PRINCIPAL_NAME for jConnect).
The -kr option cannot be specified if the -kp option is specified.
Specifying this option enables Kerberos authentication to the database server.
See also:
<same list as -kr option documents, with the addition of the -kr option>
Example:
The following command starts a database server that accepts Kerberos logins
and uses the principal myserver/mymachine.sybase.com@SYBASE.COM for authentication.
dbeng12 -kp myserver/mymachine.domain.com@MYREALM -n myserver C:\kerberos.db
================(Build #4093 - Engineering Case #633747)================
Unsetting the public option Oem_string would have caused the server to crash.
This has been fixed.
================(Build #4093 - Engineering Case #632875)================
The server would have crashed if a client application attempted to connect
while the server was shutting down after failing to start. This has been fixed.
================(Build #4093 - Engineering Case #629056)================
Attempting to connect with the connection parameter DatabaseName (DBN), but
not DatabaseFile (DBF), to a database that was not running on a network server
could have incorrectly resulted in the error "Request to start/stop
database denied". This error could have also occurred on the personal
server if the -gd option was used. This has been fixed so that this now results
in the "Specified database not found" error.
================(Build #4089 - Engineering Case #632342)================
Under rare circumstances, the server may have hung while diagnostic tracing
was enabled. This has been fixed.
================(Build #4089 - Engineering Case #623779)================
Servers running databases with large schemas may have experienced periods of unresponsiveness
at idle checkpoint time. The performance of checkpoints has been improved
to reduce the length of this interval.
================(Build #4088 - Engineering Case #633229)================
Execution of a SELECT ... INTO table_name statement would have failed with
"Syntax error near '('" if the source query contained UNIQUEIDENTIFIER
columns and the statement was attempting to create a permanent table. This
has been fixed.
================(Build #4088 - Engineering Case #628573)================
The system procedure xp_startsmtp may have returned error code 104 depending
on the SMTP server being used. This has been fixed.
================(Build #4087 - Engineering Case #633021)================
When using the external system procedure xp_startsmtp, if the SMTP authentication
failed the server would not have closed the TCP connection to the SMTP server.
This has been fixed.
================(Build #4087 - Engineering Case #633015)================
If an application called a Java external environment procedure that returned
result sets, then those result sets would not have been cleaned up for a
long time after the application was done with them. The result sets now get
cleaned up in a more timely fashion.
================(Build #4087 - Engineering Case #624801)================
An HTTP protocol option specifying a port with no value would have started
a listener on the next available port. Specifying a port with no value, or
providing a value of zero, is no longer accepted. All protocol options that
take a numeric value will no longer accept an empty value as a zero default.
================(Build #4085 - Engineering Case #632438)================
When running the Unload utility to create a new database with the same settings
(dbunload -ar), it may have immediately failed with the error "Too many
connections to database being replaced". This would have been rare,
and retrying the unload would likely have resulted in success. This has been
fixed.
================(Build #4084 - Engineering Case #632362)================
If a connection set the dedicated_task option to 'On', then there was a chance
a request for this connection would have hung. This was more likely for connections
where many requests are sent one after the other. This has been fixed.
================(Build #4084 - Engineering Case #632315)================
The START JAVA statement would have failed when the server was started through
the GUI (DBLauncher) on Mac OS X 10.6. This has been fixed. Servers started
via the command line interface (Terminal.app) do not have this problem.
================(Build #4082 - Engineering Case #632050)================
If a Java external environment had been started for a particular database,
and a connection on that database accidentally attempted to drop the SYS.DUMMY
table, then the connection would have hung instead of giving the expected
"permission denied" error. This problem has now been fixed.
Note that this problem does not exist for external environments other than
Java.
================(Build #4082 - Engineering Case #631897)================
In extremely rare timing dependent cases, if a communication error occurred
on a connection with the dedicated_task option set in a mirroring configuration,
the server could have crashed, asserted or hung. The fix for Engineering
case 628436 missed this situation, which has now been fixed.
================(Build #4080 - Engineering Case #631475)================
Calls to some system procedures may have caused a server crash if null arguments
were used. This has been fixed.
================(Build #4079 - Engineering Case #630226)================
If an ALTER TABLE statement had a DROP or ALTER column clause, and the column
did not exist, then an incorrect column name could have been reported in the
error message. This only happened if there was another ADD, ALTER or DROP
column clause in the statement. This has been fixed.
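A sketch of the scenario with hypothetical names; note that the bug required a second ADD, ALTER or DROP clause in the same statement:
```sql
CREATE TABLE t1 ( a INT, b INT );

-- Column "c" does not exist. Before the fix, the resulting error message
-- could have named a column from the other clause instead of "c".
ALTER TABLE t1
    ADD d INT,
    DROP c;
```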
================(Build #4079 - Engineering Case #627631)================
In rare cases, a database server used for mirroring could have crashed when
the connection to its partner was dropped. This has been fixed.
================(Build #4078 - Engineering Case #631017)================
If an application attempted to create a proxy table to a Microsoft SQL Server
table which contained a varbinary(max) column, then the server would have
incorrectly mapped the varbinary(max) column to varbinary(1). This problem
has now been fixed and the server now correctly maps varbinary(max) columns
to long varbinary.
================(Build #4078 - Engineering Case #630890)================
In very rare situations, the server may have crashed when executing a statement
that contained a large number of UNION, EXCEPT or INTERSECT clauses. This
has been fixed. These statements will now return the SQL error "Statement
size or complexity exceeds server limits".
================(Build #4078 - Engineering Case #630376)================
If a database being mirrored had been enabled for auditing and the mirror
servers were restarted, no auditing operations were recorded in the transaction
log. This has been fixed.
================(Build #4078 - Engineering Case #623891)================
1) If:
- the on_tsql_error database option was set to 'conditional' or 'stop'
- the continue_after_raiserror database option was set to 'off'
- a RAISERROR statement was executed in a procedure with an exception handler
- the exception handler executed a RESIGNAL statement
then the procedure's caller would not have been able to obtain the error
code used in the RAISERROR statement by examining the SQLCODE variable. The
SQLCODE value would instead have been -631 (SQLE_RAISERROR_STMT).
2) If:
- the on_tsql_error database option was set to 'conditional' or 'stop'
- the continue_after_raiserror database option was set to 'off'
- a RAISERROR statement was executed in a trigger with an exception handler
- the exception handler executed a RESIGNAL statement
then the error would not have been seen by the statement which caused the
trigger to fire.
This has been fixed. In case 1 above, the value of SQLCODE will now be the
error code used in the RAISERROR statement. In case 2, the error will now
not be suppressed by the trigger.
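Case 1 above can be sketched as follows (the error number, message and procedure name are hypothetical; SQL Anywhere requires RAISERROR numbers above 17000):
```sql
SET TEMPORARY OPTION on_tsql_error = 'conditional';
SET TEMPORARY OPTION continue_after_raiserror = 'off';

CREATE PROCEDURE p_raise()
BEGIN
    -- Raise a user-defined error, then re-raise it from the handler
    RAISERROR 99999 'custom failure';
EXCEPTION
    WHEN OTHERS THEN
        RESIGNAL;
END;
```
A caller inspecting SQLCODE after CALL p_raise() previously saw -631 under these option settings; with the fix, SQLCODE reflects the error number used in the RAISERROR statement.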
================(Build #4078 - Engineering Case #495701)================
The server allows an application to raise a customized error by means of
the RAISERROR statement. The server also provides for a built-in global variable,
SQLCODE, whose value can be examined to determine the specific error raised
during the execution of the last statement on the current connection. The
server will now report the correct user specified error number for SQLCODE
instead of a fixed -631.
================(Build #4076 - Engineering Case #630519)================
A query that referenced a view or derived table that contained a select list
item that was not a table column could have caused a crash when executing
using proxy tables. This has been fixed.
================(Build #4076 - Engineering Case #630359)================
The ASE label for the "GBK" character set has been changed from
"CP936" to "cp936", as character set names passed to
ASE APIs such as cs_locale() are case sensitive. The ASE version of character
set labels is generally not used directly by SQL Anywhere, but is provided
to users who need to use ASE libraries. Typically, a client would obtain
the ASE label via a call such as db_extended_property( 'charset', 'ase' ).
================(Build #4073 - Engineering Case #629417)================
If an application attempted to execute a Java external environment procedure,
and the target method was part of a class that had a private constructor,
then calling the Java external environment procedure would have failed with
an IllegalAccessException. This problem has now been fixed.
================(Build #4072 - Engineering Case #629153)================
If an application attempted to start an external environment session, and
other connections were being established, or were closing, at exactly the
same time, then there was a very small chance that the server could have
crashed. This problem has now been fixed.
================(Build #4071 - Engineering Case #629073)================
If a stored procedure or user-defined function contained a statement that
referenced a connection level variable (created with CREATE VARIABLE), then
it was possible for the statement to behave improperly if plan caching was
used by the server. The statement could have used the NULL SQL value for
the variable instead of giving an error if the variable were dropped, and
the statement could have used incorrect type information if the variable
was dropped and then recreated with a different data type. This has been
fixed.
================(Build #4070 - Engineering Case #622875)================
If a procedure or function was simple enough that it was inlined during semantic
query transformations, and the procedure or function contained uses of a
parameter with a different case than the case in the declared parameter list,
then the statement could have failed with an error (column not found). In
versions 10.0.1 and 11, only simple procedures would have had this problem.
In version 12.0.0 (beta), simple user-defined functions could also have exposed
this problem. This has now been fixed.
================(Build #4069 - Engineering Case #628436)================
In extremely rare timing dependent cases, if a communication error occurred
on a mirror or diagnostic tracing server-to-server connection, the server
could have crashed, failed an assertion or hung. This has been fixed.
================(Build #4064 - Engineering Case #627062)================
An INSERT ... ON EXISTING UPDATE DEFAULTS OFF statement did not update columns
defined with DEFAULT LAST USER. This has been fixed.
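The affected pattern, sketched with hypothetical names:
```sql
CREATE TABLE audit_t (
    pk          INT PRIMARY KEY,
    val         INT,
    modified_by VARCHAR(128) DEFAULT LAST USER
);

-- When row pk=1 already exists, this statement becomes an update. Before
-- the fix, the modified_by column (DEFAULT LAST USER) was not updated.
INSERT INTO audit_t ( pk, val )
    ON EXISTING UPDATE DEFAULTS OFF
    VALUES ( 1, 42 );
```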
================(Build #4063 - Engineering Case #627228)================
Under very rare circumstances the server could have crashed at startup while
updating the SYSHISTORY table. This has been fixed.
================(Build #4063 - Engineering Case #627054)================
If the system procedure sa_describe_query() was executed with null as the
query parameter then the server would have crashed. This has been fixed.
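For example, the following call previously crashed the server:
```sql
-- A null query parameter no longer crashes the server
CALL sa_describe_query( NULL );
```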
================(Build #4061 - Engineering Case #626769)================
If old transaction log files on a primary server were deleted while the server
was running, subsequent BACKUP/RENAME operations would not have resulted
in the copies of these logs on the mirror server being deleted. This has
been fixed.
A workaround is to restart both servers and perform another BACKUP/RENAME.
================(Build #4061 - Engineering Case #624586)================
The Validate Index statement would have placed an exclusive lock on the table,
preventing other connections from accessing the table. Alternatively, the
connection performing the validate could have blocked waiting for exclusive
access to the table. This has been changed so that Validate Index no longer
places an exclusive lock on the table.
================(Build #4060 - Engineering Case #626255)================
If a statement in an event caused a deadlock or blocking error the first
time it was executed, an assertion error (107001 Duplicate table in UPDATE
statement) could have been given the next time the event was executed. Now
an "invalid statement" error is given in this case. A workaround
is to define the body of the event as a procedure and call the procedure
from the event.
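The suggested workaround can be sketched like this (the schedule and event body are hypothetical):
```sql
-- Move the event body into a procedure...
CREATE PROCEDURE ev_body()
BEGIN
    MESSAGE 'event ran' TO CONSOLE;
END;

-- ...and have the event simply call it
CREATE EVENT my_event
SCHEDULE ev_sched START TIME '00:00' EVERY 1 HOURS
HANDLER
BEGIN
    CALL ev_body();
END;
```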
================(Build #4059 - Engineering Case #626295)================
If a remote query that involved GROUP BY was executed in no-passthrough mode,
and the server ran into a low memory situation, then there was a chance the
query would have failed with an "update operation attempted on non-updatable
remote query" error. This problem has now been fixed and the query will
now successfully complete without error.
Note that a workaround for this problem is to increase the amount of memory
that is available to the server.
================(Build #4059 - Engineering Case #626151)================
If an application connected to an authenticated server made an external environment
call, and the call took more than 30 seconds to complete, then the application
would have hung. The check for ensuring that an external connection was properly
authenticated was incorrect, and has now been fixed.
================(Build #4056 - Engineering Case #625493)================
If an application connected using a version of jConnect that did not support
bigtime, and the application subsequently prepared a statement that consisted
of a batch of insert and select statements, then there was a chance the server
would have incorrectly inserted a value of 00:00:00.0 for the time value
if one of the parameters to the insert was of type time. This problem has
now been fixed.
================(Build #4055 - Engineering Case #625353)================
Code that attempted to prevent a divide-by-zero condition may have caused
the server to crash. This has now been fixed.
================(Build #4055 - Engineering Case #624991)================
If a table was created with a primary key column declared as GLOBAL AUTOINCREMENT
when the global_database_id option was set to 0, a performance warning claiming
that the column was not indexed would have been written to the server console.
This has been fixed.
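The pattern that triggered the spurious warning, sketched with a hypothetical table:
```sql
SET OPTION PUBLIC.global_database_id = 0;

-- id is the primary key and therefore indexed; before the fix, a
-- performance warning claiming the column was not indexed was still
-- written to the server console
CREATE TABLE t_global (
    id      BIGINT DEFAULT GLOBAL AUTOINCREMENT ( 1000000 ) PRIMARY KEY,
    payload VARCHAR(100)
);
```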
================(Build #4055 - Engineering Case #624404)================
If an event made a call out to the Java external environment, then the Java
environment would have leaked memory with every iteration of the event. The
result would have been an eventual 'out of memory' exception from the Java
VM. This problem has now been fixed.
================(Build #4055 - Engineering Case #623281)================
Doing absolute fetches from a cursor would have slowed down after one or
more tables had been updated many times. Restarting the server would have
resolved the problem. This has been fixed.
================(Build #4051 - Engineering Case #624179)================
If an application executed an INSERT statement that contained a file name
with escaped single quotes as follows:
INSERT INTO directoryTab(file_name, contents) VALUES( 'he''l''lo.txt',
0x0 )
where directoryTab was a directory access table, then the resulting file
would incorrectly have been named "he'l'lo.txtxt", instead of the
expected name "he'l'lo.txt". This problem has now been fixed.
================(Build #4051 - Engineering Case #624047)================
Validating or unloading an empty table could have caused the server to fail
an assertion when the database had been started read-only. This would only
have happened if the table contained an index, and a truncate table had just
been done. This has been fixed.
================(Build #4049 - Engineering Case #623769)================
If a TDS based application using a multi-byte character set, connected to
an SA database using a single-byte character set, subsequently fetched a
char(n) or varchar(n) value, and the char/varchar value resulted in greater
than n bytes when converted to the client's multi-byte character set, then
the client would have received an incomplete value. This problem has now
been fixed.
================(Build #4048 - Engineering Case #623432)================
The database server could have leaked memory in rare circumstances when strings
were being accessed concurrently. This has been fixed.
================(Build #4048 - Engineering Case #621829)================
If an application attempted to insert data into a proxy table, and one of
the columns was an nchar based column, then there was a chance the data would
have been truncated. This problem has now been fixed.
Note, when creating proxy tables to Oracle tables that contain varchar2
columns, the Oracle ODBC driver does not provide enough information for SQL
Anywhere to correctly map the varchar2 columns to nvarchar columns. It is
therefore strongly recommended that an explicit column list be used when
creating proxy tables to Oracle tables containing varchar2 columns, and that
the explicit column list appropriately maps the varchar2 columns to nvarchar
columns.
================(Build #4048 - Engineering Case #615617)================
If an application connected via jConnect attempted to retrieve the column
metadata of a result set that contained a varbit, long varbit, nchar, nvarchar
long nvarchar, or uniqueidentifier column, then the column metadata would
have been returned with an unknown datatype. This problem has now been fixed.
================(Build #4045 - Engineering Case #622552)================
A misconfigured SQL Anywhere webservice function may have caused the server
to crash when the function was executed. The problem was specific to a function
declaration (not a procedure) that was configured as TYPE 'HTTP:POST:<mimetype>'
(e.g. mimetype = text/xml) and that declared, but did not utilize, all substitution
parameters. This has been fixed.
The following illustrates the problem, note that the clause consuming the
substitution parameter is commented out:
create function bad_params(str long varchar, len int)
returns long varchar
url 'http://127.0.0.1/no_serv
================(Build #4045 - Engineering Case #622512)================
If an application was connected via jConnect 7 or Open Client 15.5, and the
application fetched a datetime value, then the fractional seconds portion
of the value would have been returned with six digits of precision; however,
fetching a timestamp value would still have returned 1/300th of a second
precision. This problem has been fixed and fetching either datetime or timestamp
values using jConnect 7 or Open Client 15.5 will now return the full six
digits of precision.
================(Build #4045 - Engineering Case #622021)================
Following the fix for Engineering case 588740, the server could have performed
slowly when deleting large numbers of rows concurrently. This has been fixed.
================(Build #4045 - Engineering Case #621822)================
A SQL Anywhere webservice client procedure may have truncated an HTTPS response
under certain circumstances. This has been fixed.
================(Build #4044 - Engineering Case #620795)================
The function count_set_bits may have returned a number that was too large,
if a bitwise NOT operation had previously been applied to the operand. This
has been fixed.
================(Build #4042 - Engineering Case #621827)================
Same-machine TCP/IP broadcasts did not work correctly on Mac OS X 10.6. As
a result, it may have been possible to start multiple database servers with
identical names on the same machine. This has now been fixed.
================(Build #4042 - Engineering Case #621665)================
If an application was connected via jConnect or Open Client, then the connection
name for that TDS based connection would have been empty. This has now been
fixed and the connection name for TDS based connections will now default
to the application name.
================(Build #4041 - Engineering Case #621162)================
A SQL Anywhere 'RAW' web service defined with AUTHORIZATION ON would have
failed when the service was defined with an AS NULL statement. This has
been fixed.
Note, service types: HTML, XML, RAW, JSON may contain a NULL statement only
with AUTHORIZATION ON, DISH services always contain a NULL statement and
SOAP services must contain a (non-NULL) statement.
================(Build #4039 - Engineering Case #620977)================
If many databases were started for mirroring on a single server, the server
could have hung after running for 30 minutes or more. This has been fixed.
See also Engineering case 617811.
================(Build #4039 - Engineering Case #620474)================
If the primary server (S1) in a database mirroring environment was running
in a VM and the VM was paused or otherwise inactive for sufficient time that
the mirror server's connection to S1 was dropped, causing a failover to the
mirror (S2), then when S1 was resumed it would not have realized that a failover
had occurred and would have continued to act as a primary. This has been
fixed.
================(Build #4037 - Engineering Case #619976)================
The server could have crashed when executing an aggregate function that operated
on string data when the Group By operator was forced into a low-memory strategy.
This has been fixed.
A workaround is to increase the amount of memory available to the server.
================(Build #4036 - Engineering Case #619950)================
The Unload utility (dbunload) failed to add the length information for
VARBIT user domain definitions in the reload.sql file. This has been fixed.
================(Build #4035 - Engineering Case #619190)================
A base table with publications was not allowed to be used in any parallel
access plan. This has been fixed: a table with publications is now excluded
from a statement's parallel plan only if the table is updatable in that
statement.
================(Build #4034 - Engineering Case #619357)================
When an application attempted to make an external environment call, there
was a very small chance the server would have crashed if the external environment
for that connection shut down at exactly the same time as the application
made the external environment call. This problem has now been fixed.
================(Build #4034 - Engineering Case #619113)================
In very rare timing-dependent circumstances, the server may have crashed when
querying connection properties for a connection in the process of disconnecting.
This has been fixed.
================(Build #4034 - Engineering Case #609706)================
When running, the database cleaner could have interfered with transactions
by causing locking attempts to fail. This has been fixed by having the requesting
transaction wait for the cleaner.
================(Build #4033 - Engineering Case #619338)================
Attempting to execute a SELECT statement that referenced a stored procedure
in the FROM clause could have caused the server to crash. This has been fixed.
================(Build #4029 - Engineering Case #619128)================
The server could have failed an assertion, or returned a spurious error,
if a query used a keyset cursor or if a keyset cursor was implicitly used
in the processing of a DELETE or UPDATE statement. For this to have occurred
there must have been concurrent updates (with respect to the lifetime of
the keyset). This was most likely to have happened if a SHARE BY ALL global
temporary table was involved. If no temporary tables were involved,
only DELETE statements were likely to cause issues. The error most likely
to be seen was 'unable to find in index'; assertions included 101412 and
200502 (among others). This has been fixed.
================(Build #4029 - Engineering Case #619054)================
If the execution of a DELETE statement involved remote tables, and the DELETE
statement could not be handled in full passthru, then the server could have
failed assertion 201501 "Page for requested record not a table page
or record not present on page". This problem has now been fixed, and
a proper error message is returned.
================(Build #4029 - Engineering Case #618587)================
An ALTER DATABASE CALIBRATE DBSPACE TEMPORARY may have caused the server
to fail assertion 200501. This has been fixed.
================(Build #4027 - Engineering Case #618257)================
In some cases, operations on long strings (blobs) could have leaked memory
in the main heap. This memory would not have been reclaimed until the server
was restarted. In order for this problem to have occurred, the blob must
have been at least 8 database pages long, and must have been accessed using
a random-access interface such as byte_substr() with a starting offset of
at least 3 times page size. This has been fixed.
================(Build #4027 - Engineering Case #617811)================
If a database server was started with many databases (e.g. 16) that were
configured for database mirroring, the mirror server could have hung, causing
the primary server to also hang until the mirror server was stopped. This
has been fixed. A workaround is to increase the value for the -gn option
from its previous setting (default 20) to a value 2 times the number of mirrored
databases.
================(Build #4027 - Engineering Case #617662)================
If an application connected to a case sensitive database executed a remote
query that contained a Group By clause, and one of the columns referenced
in the Group By had a different case than the column reference in the select
list, then the server would have incorrectly failed the query with error
-149 "Function or column reference must also appear in a GROUP BY."
For example, the following query would have failed:
SELECT test.Column1 FROM proxy_t test GROUP BY test.column1
whereas the following queries:
SELECT test.column1 FROM proxy_t test GROUP BY test.column1,
and
SELECT test.Column1 FROM proxy_t test GROUP BY test.Column1
would have succeeded. This problem would only have occurred if the local database
was case sensitive and proxy_t was a proxy table. This has now been fixed.
================(Build #4025 - Engineering Case #618459)================
If a primary server (S1) was somehow frozen for long enough that its connections
exceeded the liveness timeout, and then exited the frozen state, the loss
of its connection to the mirror server would cause it to send a stale status
to the arbiter which should have been disregarded, but was not. Restarting
S1 would result in it attempting to become the primary server if a connection
to the second mirror server (S2) could not be made, yielding either two primary
servers or an alternate server name conflict. This has been fixed. Stale
state information will now be disregarded when received.
================(Build #4025 - Engineering Case #617619)================
If multiple backup statements for the same database were executed concurrently
with the WAIT BEFORE START option specified, and there was at least one connection
with uncommitted operations, the server could have appeared to hang or run
very slowly. This problem has been fixed.
================(Build #4025 - Engineering Case #595276)================
Procedures containing XML generation functions (XMLAGG, XMLELEMENT, etc.)
that were simultaneously executed by large numbers of connections, could
have caused the server to crash. This has been fixed.
A workaround is to rewrite procedures that cause this behaviour to use the
XML generation functions as EXECUTE IMMEDIATEs with a trim. For example:
CREATE PROCEDURE FOO()
BEGIN
SELECT XMLELEMENT('foo');
END;
could be rewritten as:
CREATE PROCEDURE FOO()
BEGIN
EXECUTE IMMEDIATE trim('SELECT XMLELEMENT(''foo'')');
END;
Note that such rewritten procedures will no longer be able to take advantage
of plan caching.
================(Build #4024 - Engineering Case #617804)================
Attempting to execute an INSERT statement with the WITH AUTO NAME clause
could have caused the server to crash. This has been fixed.
================(Build #4023 - Engineering Case #617640)================
Use of a timestamp that had a number of seconds with more than 9 digits after
the decimal place could have yielded unexpected results.
For example:
select datepart( ms, '14:44:33.9876543211' )
would have returned 128, instead of the expected result of 987.
This has been fixed by truncating the seconds value of a timestamp at 9 decimal
places before it is used.
================(Build #4022 - Engineering Case #619552)================
Queries that used indexed snapshot scans could have returned extra rows.
This has been fixed. See also Engineering case 612617.
================(Build #4022 - Engineering Case #617219)================
If an application connected using Open Client 15.5, and then subsequently
attempted to fetch a Time or Timestamp value, then the fetch would have failed
with a protocol error. This problem has now been fixed.
Note that this problem does not affect versions of Open Client prior to
15.5.
================(Build #4022 - Engineering Case #617177)================
On Solaris SPARC systems, the 32-bit SQL Anywhere libraries were linked against
libC.so.5, the compatibility libC variant, even though the libraries were
not compiled in compatibility mode (i.e., -compat=4 was not used when compiling
the libraries). A C++ application that was not itself linked against libC.so.5
could have crashed when trying to load these libraries. The libraries are
no longer linked against libC.so.5 and now are only linked against libCrun.so.1.
C++ client applications compiled with the -compat=4 compatibility flag, or
linked against libC.so.5, are not supported.
================(Build #4022 - Engineering Case #615212)================
When computing an aggregate function such as AVG() or SUM(), it was possible
for the result of the calculation to overflow the bounds of the data type
used for accumulation, leading to an answer that was not numerically correct.
Even if the option Ansi_integer_overflow was set to 'On', the overflow was
not reported as an error. If AVG() or SUM() overflowed an INT type, then
the argument to the aggregate can be cast to DOUBLE or NUMERIC to avoid the
overflow (with a concomitant performance degradation). In specific conditions,
an arithmetic operation could have caused a server crash. This has been fixed.
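The cast workaround described above, with hypothetical table and column names:
```sql
-- SUM over an INT column accumulates in an integer type and can overflow;
-- casting the argument widens the accumulator, at some performance cost
SELECT SUM( CAST( qty AS NUMERIC(30) ) ) AS total_qty
FROM line_items;
```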
================(Build #4022 - Engineering Case #612617)================
If the row containing a particular unique value changed from one row to another,
and then back again, snapshot transactions open before or during the updates
might not return a row when expected, or return two copies of the expected
row. This has now been corrected.
================(Build #4021 - Engineering Case #616985)================
If an application attempted to fetch long string data from a proxy table,
and the ODBC driver being used to connect to the remote server did not support
UNICODE entry points, then there was a chance the fetched data would have
been missing some characters. This problem has now been fixed.
Note, there are very few ODBC drivers that do not support UNICODE entry
points. As a result, this problem affects a very small number of applications
that use remote servers.
================(Build #4019 - Engineering Case #616395)================
The system procedure sa_split_list() did not work as expected when a multi-character
delimiter was provided, and the string to be split was shorter than the delimiter.
No rows were returned, whereas the expected result was a single row containing
the string to be split. This has been fixed.
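For example, the previous behaviour returned no rows here:
```sql
-- The string 'ab' is shorter than the 3-character delimiter '|||';
-- the expected (and now actual) result is a single row containing 'ab'
SELECT row_value FROM sa_split_list( 'ab', '|||' );
```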
================(Build #4019 - Engineering Case #612462)================
Queries that contained a subquery that was rewritten by semantic transformations
to flatten subqueries, could have failed with the error "Assertion failure
106104 (...) Field unexpected during compilation". This problem would
only have occurred if the query block that was being flattened had a subquery
in the SELECT list with outer references. This has now been fixed.
Note, a potential workaround is to modify the query so that it is no longer
suitable for the semantic transformation that flattens the query block.
================(Build #4018 - Engineering Case #607651)================
In extremely rare circumstances, fetching a string from a table could have
caused the server to hang. This would only have occurred if the string was
longer than the prefix size of the column, but less than the page size of
the database, and a string manipulation function such as TRIM() was being
used, and another connection was attempting to update the string at the same
time. This has been fixed.
================(Build #4017 - Engineering Case #615627)================
When using snapshot isolation, the Validate utility (dbvalid), or the "VALIDATE
DATABASE" statement, may have spuriously reported the error "Database
validation failed for page xxxx of database file". These errors would
then have disappeared after a clean shutdown of the database. This has been
fixed.
================(Build #4016 - Engineering Case #615255)================
When run on Windows CE, the server may have reported an inaccurate reason
when a file error occurred. A server that was using the ICU library (dbicudtnn.dll)
could have reported a general I/O error if the database file did not exist.
A server that was not using ICU could have reported that a database file did
not exist when a different file error occurred. This has been fixed.
================(Build #4016 - Engineering Case #613341)================
If an application that was connected using jConnect or Open Client, queried
the metadata of a long nvarchar, nvarchar, nchar, date or time column, then
the metadata returned by the server would have been incorrect. This problem
has now been resolved.
Note, in addition to getting an updated server, an ALTER DATABASE UPGRADE
must be executed on each database, to update the metadata for jConnect and
Open Client applications.
================(Build #4014 - Engineering Case #614632)================
In rare circumstances, calling the system function db_property() to retrieve
a database property from a database other than the one connected to (for
example, when calling sa_db_info()), may have resulted in invalid data being
returned. This would only have occurred if the property being requested returned
a string, and conversion between the character sets of the two databases
was unnecessary. This has been fixed.
================(Build #4014 - Engineering Case #605645)================
On rare occasions, the execution of a VALIDATE DATABASE statement could have
reported spurious orphaned page errors. This has been fixed.
================(Build #4013 - Engineering Case #614405)================
The server could have failed an assertion, or crashed, when reinserting a deleted
non-null value into a unique index. In rare cases, database corruption was
possible. System and temporary table indexes were not affected. This has
now been fixed.
================(Build #4013 - Engineering Case #613999)================
OEM Edition servers would have crashed when started with the -fips switch.
This has been fixed.
================(Build #4011 - Engineering Case #613816)================
If an application connects using an older version of jConnect or Open Client,
and subsequently fetches a Time or Timestamp value, then the server is required
to round the fractional seconds portion of the Time/Timestamp value up to
the nearest 1/300th of a second. For these older versions of jConnect or
Open Client, the server would not always have properly rounded the fractional
seconds portion up to the nearest 1/300th of a second. This problem has now
been fixed.
Note that newer versions of jConnect and Open Client support microsecond
precision, so no rounding to 1/300th of second will occur if an application
uses these newer versions.
================(Build #4010 - Engineering Case #612409)================
When connected to a multi-byte character set database, if an application
attempted to create a proxy table to a remote table that had an underscore
in its name, then there was a chance the server would fail the request with
the error "the table specification '<location-string>' identifies
more than one remote table". This problem would only have occurred if
the remote had multiple tables whose names differed only by the character
in the underscore location. For example, if a remote had tables named tab_1
and tabx1, and if the application attempted to create a proxy table to map
to tab_1, then the server would give the "more than one remote table"
error. This problem has now been fixed.
================(Build #4010 - Engineering Case #609701)================
The sample ECC certificate eccroot.crt shipped with versions 9.x and 10.x
expired on November 17, 2009. As a result, the sample server certificate
sample.crt has also expired, since it was signed by eccroot.crt. These have
been replaced by new sample ECC certificates. The new server certificate
is called eccserver.crt, and its password is "test". The file name
for the signing certificate is still eccroot.crt but the certificate itself
is different.
================(Build #4003 - Engineering Case #612094)================
An incorrect response length may have been recorded in the SQL Anywhere HTTP
log for a long-lived HTTP connection, such as a pipelined connection. While
the first response length was correct, subsequent response lengths were
cumulative. The problem occurred when HTTP logging was enabled and the @L
LogFormat specified the logging of the response length (the default). This
has now been fixed.
================(Build #4002 - Engineering Case #611611)================
If an application executed a query similar to the following:
select * from T where price * (1 - discount) > 500
and the table T was a remote table, then it was possible the query would
have returned the wrong result. This was due to the fact that the Remote Data
Access layer sometimes failed to include the parentheses when generating
queries to be executed on remote servers. This problem has now been fixed.
================(Build #4001 - Engineering Case #611227)================
The MESSAGE statement did not allow specifying the EVENT, SYSTEM LOG and
DEBUG ONLY clauses at the same time. This has now been corrected.
================(Build #4001 - Engineering Case #610724)================
Problems with an LDAP server could have caused a SQL Anywhere server, or
a client application using it, to hang. Calls to the LDAP library were synchronous,
so if the LDAP server was hung and did not respond, the SA server would have
waited forever for a response. This has been fixed by making the LDAP library
calls asynchronous and adding a timeout.
================(Build #3999 - Engineering Case #610718)================
If an application executed an UPDATE statement, and the UPDATE statement
involved proxy tables, then the server may have crashed when the UPDATE statement
could not be handled in full passthrough mode. This problem has now been
fixed, and a proper error message is returned.
================(Build #3999 - Engineering Case #610505)================
Attempting to rename an index (or text index) to an invalid name would
have resulted in unexpected behaviour of subsequent statements related to
the index. This has been fixed.
================(Build #3998 - Engineering Case #610115)================
The database server was vulnerable to a particular type of denial-of-service
attack. This has been fixed.
================(Build #3994 - Engineering Case #608904)================
Additional drive flushing was added to improve recoverability (see Engineering
case 588740); however, this flushing could have made the server significantly
slower when no transaction log was present due to every commit causing a
checkpoint. This performance issue has been addressed by reverting to the
old flushing behaviour when no transaction log is being used.
================(Build #3993 - Engineering Case #608552)================
In rare cases, executing the ATTACH TRACING statement could have caused
the server to crash. This has been fixed.
================(Build #3992 - Engineering Case #608342)================
A server participating in a mirroring system may, on rare occasions, have
crashed if an outgoing connection to another server participating in the
mirroring system failed. This has now been fixed.
================(Build #3992 - Engineering Case #606227)================
When running on HPUX, Solaris or AIX systems, it was possible for the server
to crash while receiving IPv6 traffic. This has been fixed.
================(Build #3990 - Engineering Case #606651)================
The creation of a proxy procedure for a procedure on a remote server may
have caused a server crash, or failed assertion 201503, if a proxy procedure
with the same name had been dropped as part of the execution of a DROP REMOTE
SERVER statement. This has now been fixed.
A work around for the problem is to drop all proxy procedures belonging
to a remote server before executing the DROP REMOTE SERVER statement.
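The workaround can be sketched as follows, assuming a hypothetical remote
server remsrv with a single proxy procedure remproc:
DROP PROCEDURE remproc;
DROP REMOTE SERVER remsrv;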
================(Build #3990 - Engineering Case #596656)================
If an application made an external environment call, and the external environment
procedure subsequently made a server-side call that required acquiring a
lock that was already held by the connection making the external environment
call, then there was a chance the application would hang. This problem has
now been fixed.
================(Build #3988 - Engineering Case #606858)================
A possible, but unlikely, security hole involving secure communications on
MacOS systems has been fixed.
================(Build #3988 - Engineering Case #606835)================
The reason reported by the server for failing to start a database may have
been incorrect. When attempting to open a file, the server will retry on
certain errors. If it retries too many times it just reports 'database not
found'. This behaviour was much more likely with the changes for Engineering
case 605413, as sharing violations were now retried. This behaviour has now
been changed so that the server reports the last OS error when it fails to
open the database file.
================(Build #3986 - Engineering Case #606038)================
If an application attempted to create a proxy table to a Microsoft SQL Server
table that contained a varchar(max) or nvarchar(max) column, then the server
would have incorrectly mapped the varchar(max) columns to varchar(1) and
the nvarchar(max) columns to nvarchar(1). This problem has now been fixed
and the server now correctly maps varchar(max) columns to long varchar and
nvarchar(max) columns to long nvarchar.
================(Build #3985 - Engineering Case #595999)================
With versions of the server that included the changes made for Engineering
case 555808, queries with a recursive union could have failed to match rows
on the recursive passes. Although there was nothing wrong with the fix itself,
the changes exposed the underlying problem, which has now been fixed.
A workaround is to drop indexes on the table(s) being queried recursively,
although there may be performance implications to doing this, which could
be significant.
================(Build #3984 - Engineering Case #605653)================
If a REORGANIZE TABLE statement failed due to the table having been locked,
then subsequent attempts to execute a REORGANIZE TABLE statement would have
also failed. The error would have been that a reorganize was already
in progress. This has been fixed.
================(Build #3984 - Engineering Case #605414)================
On very rare occasions, if the number of allowed connections was exceeded,
the HTTPS server may have sent the "Service temporarily unavailable"
503 response in plaintext. This has been fixed.
================(Build #3984 - Engineering Case #605413)================
If the server attempted to open a database file concurrently with antivirus
software, the database could have failed to start, or the server could have
failed with an assertion error. This has been fixed by adding a retry for
sharing violations on a file open.
================(Build #3984 - Engineering Case #605393)================
The value being maintained for the CacheFree property was not as documented
and was of limited use. The value now returned is the number of cache images
that contain no useful data. The values for the properties CacheFree+CachePinned+CacheFile
should give the current cache size (i.e. number of images currently in the
cache). The values for the properties CacheFile+CacheFree should give an
upper bound on the number of pages immediately available for reuse (without
resorting to growing the cache).
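Assuming these are server properties readable with the PROPERTY() function,
the relationships above can be checked with a query such as:
select property( 'CacheFree' ) + property( 'CachePinned' )
       + property( 'CacheFile' ) as current_cache_pages,
       property( 'CacheFile' ) + property( 'CacheFree' ) as reusable_page_bound;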
================(Build #3983 - Engineering Case #596419)================
When autostarting a database, database-specific options (such as -ds) which
had values containing quotation marks were not handled correctly. For example,
the following would not have worked correctly:
dbisqlc -c "dbf=my.db;dbs=\"d:\tmp\spacey path\""
This problem has now been corrected.
Note that using quotation marks on the command line to start a database
server worked correctly:
dbeng11 my.db -ds "d:\tmp\spacey path"
A related problem was found and fixed in dbisqlc which handled the START
DATABASE statement itself, and constructed a connection string containing
a "dbs=-ds ..." parameter, rather than passing the START DATABASE
statement to the server. Dbisqlc was not putting quotes around a -ds parameter
that contained spaces.
================(Build #3982 - Engineering Case #587856)================
An application that was connected via jConnect or Open Client, and that
attempted to insert or retrieve a date value prior to January 1, 1753, would
incorrectly have received an 'invalid date value' error for the insert, and
would have been returned the date January 1, 1753 for the fetch. This problem
has now been fixed. Note that applications must use newer versions of jConnect
or Open Client in order to get support for dates prior to January 1, 1753.
Also, the January 1, 1753 restriction still exists for datetime values when
using jConnect or Open Client.
================(Build #3979 - Engineering Case #595699)================
In very rare cases, the Windows and Linux server may have hung and stopped
processing requests if request level logging was turned on. This has been
fixed.
================(Build #3978 - Engineering Case #595504)================
If an authenticated application connected to an authenticated database and
executed an external environment call, then there was a chance the external
call would fail with an authentication violation error. This problem has
now been fixed.
================(Build #3978 - Engineering Case #583560)================
If an application that was connected to a local server, either via the SQL
Anywhere ODBC driver or the iAnywhere JDBC driver, attempted to perform a
wide or batched insert into a proxy table and the insert subsequently returned
with a conversion or some other error part-way through the insert, then the
rows leading up to the error would have been inserted into the proxy table
twice. This problem has now been fixed.
================(Build #3977 - Engineering Case #593472)================
If a subquery contained an equality predicate with an outer reference, and
the left and right expressions of the equality predicate had different domains,
then the computed result set may have been incorrect. The equality predicate
must have been of the form "local column = outer reference column".
This problem has now been fixed.
For example:
select * from R, S
where R.X NOT IN ( select T.X from T where T.N = S.I)
where the column T.N is of type numeric and the column S.I is of type integer.
================(Build #3977 - Engineering Case #593334)================
The error "Fatal error: Could not write to file" could have been
returned from the server when attempting to write to a file in a clustered
environment. While the clustering service was performing some tasks, it
was possible that the database server would be given an error ERROR_NOT_READY
when attempting to perform an operation on the file. The server now retries
the operation several times in this circumstance.
================(Build #3977 - Engineering Case #592887)================
Some database corruptions could have caused the cleaner to attempt to reference
pages beyond the end of the database. This situation is now caught, and the
server will halt with assertion failure 201301.
================(Build #3977 - Engineering Case #591546)================
When the server executed a CREATE VIEW statement, and the view's SELECT statement
referenced a materialized view that was not yet initialized, the statement
would have failed with the error "Cannot use materialized view 'viewname'
because
it has not yet been initialized". The script generated by dbunload
-n could have failed trying to recompile views. This has been fixed.
================(Build #3975 - Engineering Case #594528)================
In very rare situations, the server could have failed assertion 104908 at
shutdown. This has been fixed.
================(Build #3973 - Engineering Case #592860)================
On Unix systems, starting the server as a daemon could have hung if a fatal
error occurred while starting up. This included Linux Standalone and Network
services installed with the Service utility (dbsvc) as 'automatic on startup'.
This has been fixed.
================(Build #3972 - Engineering Case #593428)================
If an application executed a query containing a large number of proxy tables
on a 64-bit server, and the query ended up being executed in NO PASSTHRU
mode, then there was a chance the server would have failed assertion 101508
instead of giving the "syntactic limit exceeded" error. This problem
has now been fixed, and the "syntactic limit exceeded" error is
now properly returned.
================(Build #3972 - Engineering Case #592589)================
Some computed bitstring values (i.e. those produced as a result of a set_bit,
&, |, ^ or ~) might not have hashed properly. Operations that can hash
bitstring values during their execution (for example, select distinct of
a bit column) could have returned incorrect results. This has been fixed,
but existing tables containing affected values will require an unload/reload.
Alternatively, if c is an affected column in table t, "update t set
c = ~c" can be run twice with a server containing the fix.
================(Build #3969 - Engineering Case #592912)================
In some cases, the database server was not able to fully recover from a crash,
and displayed an assertion failure message. The server console would have
shown that the server was able to recover the database, and a checkpoint
was completed successfully, but then assertion failure 100920 was displayed:
"Transaction log page X is corrupted." This problem has now been
fixed.
================(Build #3968 - Engineering Case #590692)================
The server may have modified the wrong table when executing an UPDATE or
DELETE on a view, if the view was specified in the FROM table-list as well.
This has been fixed.
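A statement of the affected shape, using a hypothetical view V and table T,
would be:
update V
set V.qty = V.qty + 1
from V join T on V.id = T.id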
================(Build #3968 - Engineering Case #589624)================
Some string operations, involving concatenation and substring on compressed
columns, may have caused the fetch request to hang forever. This has been
fixed.
================(Build #3965 - Engineering Case #591061)================
If a database had a partial write to the checkpoint log, then it was possible
that database recovery could have failed in a case which was actually recoverable.
This only affected encrypted databases. This has now been fixed.
================(Build #3965 - Engineering Case #591001)================
In very rare circumstances, the server may have crashed when it should have
returned the SQL error SQLSTATE_SYNTACTIC_LIMIT. This may have occurred when
loading very complex view definitions, or when executing a SELECT ... INTO
table statement. This has been fixed.
================(Build #3963 - Engineering Case #590591)================
If the server was started on Netware, and multiple connections that had made
Java calls shut down at the same time, then there was a chance the server
would have crashed. This problem has now been fixed.
================(Build #3963 - Engineering Case #590041)================
If query optimization with matching materialized views generated an error
while processing a materialized view candidate, the error was still returned
to the application. For example, if a materialized view candidate contained
additional tables for which the user did not have SELECT permissions, the
error "Permission denied: you do not have permission to select from
"tablename" would have been returned. This has been fixed. Now,
if an error is encountered while processing a materialized view candidate,
the error is ignored and the view is not used in the view matching process.
================(Build #3963 - Engineering Case #587671)================
The server may have crashed when trying to find matching materialized view
candidates. This would have happened when a materialized view candidate had
a very complex SELECT clause, and the server was close to stack overflow
or had too little cache space. This has been fixed.
================(Build #3962 - Engineering Case #590156)================
The server may have incorrectly rewritten WHERE, ON, or HAVING clauses, causing
no rows, or too few rows, to be returned. This would have happened when the
server found redundant conjuncts and tried to remove them. This has been fixed.
A sample of this type of query:
select 1 from T
where a = 1 and ( b = 2 or c = 8 ) and ( d = 4 or e = 10 )
and ( a = 1 or e = 7 or c = 9 )
================(Build #3961 - Engineering Case #589802)================
Applications would have been able to connect to a database using the database's
alternate server name, and then create, stop, and drop other databases on
the same server. This has been fixed, all of these operations are now disallowed
when connected through an alternate server name.
================(Build #3961 - Engineering Case #589762)================
The server would have crashed if the system functions db_property('LogFileFragments')
or sa_db_properties() were executed against the utility database. This has
been fixed.
================(Build #3961 - Engineering Case #589646)================
The number of bytes required to store a bitstring column value could have
been under reported. This could then have potentially caused buffer overruns
in client applications. This has been fixed so that the correct byte size
is now reported.
================(Build #3960 - Engineering Case #588924)================
If a timestamp column was defined with "not null default [utc] timestamp",
and an insert specified a null value for that column, then the insert would
have failed with a SQL error. In version 9 and earlier, the insert used the
default instead and did not fail. This behaviour has now been restored.
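For example, with a hypothetical table defined as:
create table t1 ( id int primary key, created timestamp not null default timestamp );
the following insert again uses the default value rather than failing:
insert into t1( id, created ) values ( 1, NULL );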
================(Build #3957 - Engineering Case #568264)================
The server could have hung, consuming CPU, while attempting to shutdown a
database if all workers were busy servicing other requests on different databases.
This has been fixed.
A workaround is to increase the -gn value of the server.
================(Build #3956 - Engineering Case #588740)================
We learned that in the interest of improved performance, Microsoft Windows
explicitly prevents certain documented methods of guaranteeing that data
has been written to the physical storage medium from working on IDE/SATA/ATAPI
drives (SCSI drives are unaffected). Recoverability after a power outage
could be compromised. The database server now performs additional operations
to flush data to disk to improve recoverability. In testing, there was no
measurable performance degradation by this change.
Relevant third-party articles:
http://perspectives.mvdirona.com/2008/04/17/DisksLiesAndDamnDisks.aspx
http://msdn.microsoft.com/en-us/library/dd979523%28VS.85%29.aspx
http://research.microsoft.com/apps/pubs/default.aspx?id=70554
http://groups.google.com/group/microsoft.public.win32.programmer.kernel/browse_frm/
thread/4590ed3a4133828f/406cfb3a9deae044
================(Build #3956 - Engineering Case #588720)================
If an application started Java, then subsequent connections to the server
may have found that the option settings for date_format, time_format, timestamp_format
and date_order were different than expected. This problem has now been fixed.
================(Build #3956 - Engineering Case #588692)================
On Unix systems, if the transaction log for a primary server was not located
in the server's current working directory, and was renamed when the mirror
server was unavailable and the primary server was restarted, synchronization
would have failed when the mirror server then became available again. This
has been fixed.
================(Build #3956 - Engineering Case #588539)================
If an application attempted to access a JDBC based Remote Data Access server,
and the tsql_variables option was set to ON, then the server would have failed
the request with a syntax error. This problem has now been fixed.
================(Build #3956 - Engineering Case #588498)================
If a server attempted to abandon or cancel a request to start an external
environment, due to the server being extremely busy or overloaded, then there
was a very small chance the server would have hung. This problem has now
been fixed.
================(Build #3956 - Engineering Case #588329)================
If an application connected via jConnect used the method CallableStatement
to execute a stored procedure, then there was a chance the connection would
have terminated with a protocol error. This problem would only have occurred
if the stored procedure had an argument named @p# where # was an integer
between 0 and n-1 with n representing the number of arguments in the stored
procedure. For example, if an application connected via jConnect used CallableStatement
to execute a stored procedure named test, and if test had 3 arguments, and
if one of the arguments in test was named @p0, @p1 or @p2, then the server
would have dropped the connection with a protocol error. This problem has
now been fixed.
It should be noted that this problem does not affect applications using
the iAnywhere JDBC driver.
================(Build #3952 - Engineering Case #586829)================
UPDATE and DELETE statements could have acquired intent locks on tables that
were not modified by the statement, possibly introducing concurrency or deadlock
problems to an existing application. This has been fixed.
================(Build #3951 - Engineering Case #586362)================
If an application made use of a global temporary table in an external environment
via a server-side connection, then there was a chance that the server may
have failed an assertion, hung, or crashed, when the application connection
closed. This problem has now been fixed.
================(Build #3950 - Engineering Case #587214)================
When HTTP client debugging was turned on, calling the HTTP client procedure
would have resulted in a request time-out. This problem was introduced by
the changes made for Engineering case 565244, and would have occurred when
client logging was initiated, either with the -zoc command line option, or
by calling the system procedure sa_server_option(), to set the 'WebClientLogFile'
file name. This has now been fixed.
================(Build #3950 - Engineering Case #586180)================
Under certain circumstances, the server could have failed an assertion, or
crashed, while processing queries containing large strings. This has been fixed.
================(Build #3948 - Engineering Case #586837)================
On Unix systems, the 64-bit server may have failed to start when the virtual
memory user limit (ulimit -v) was low relative to the sum of the physical
memory and swap space. This has been fixed.
================(Build #3947 - Engineering Case #586629)================
OPENXML namespaces may have caused the server to crash. This has been fixed.
================(Build #3945 - Engineering Case #585979)================
Using the BCP utility to insert data into a table could have failed with
a protocol error, if the version of Open Client that BCP used was older.
This problem has now been fixed.
================(Build #3944 - Engineering Case #585713)================
Under rare circumstances, the server may have hung when run on Solaris systems.
This was more likely to have occurred on machines with greater parallelism
(e.g., Niagara chips) under a highly concurrent load. This has been fixed.
================(Build #3944 - Engineering Case #584125)================
Specifying an IPv6 address with a port number for the HOST option, e.g.
LINKS=tcpip(host=(<ipv6 address>):<port>), would give an SQLE_CONNECTION_ERROR
(-832) with the text "Unexpected error 10" or "Mismatched braces near
'???'". This only happened when parentheses were used for both the TCP
parameter and the HOST option. This has been fixed.
As a workaround, use braces to specify the TCP options, e.g. LINKS=tcpip{host=(<ipv6
address>):<port>}.
================(Build #3943 - Engineering Case #585282)================
If an application connected via Open Client or jConnect, and requested a
packet size greater than 512 bytes, the SQL Anywhere server would have restricted
the packet size to a maximum of 512 bytes. This restriction has now been
relaxed such that the maximum Open Client or jConnect packet size is now
4096 bytes for a version 11.0.1 or above server, and 1024 bytes for a version
10.0.1 server.
================(Build #3941 - Engineering Case #584152)================
A statement-level "after update of" trigger would not have fired
if the columns in the "of" list were modified by another trigger,
but were not present in the set clause of the originating query. This has
been fixed.
Note, row level triggers do not have this problem.
================(Build #3941 - Engineering Case #559258)================
The stored procedure debugger failed to report information about the size
of string and numeric data types when debugging a stored procedure. This
has been corrected.
================(Build #3940 - Engineering Case #583241)================
Under some specific circumstances, the server could have crashed while updating
tables with triggers. This has been fixed so that execution of UPDATEs now
takes place correctly.
A workaround is to recreate the table in question.
================(Build #3939 - Engineering Case #584517)================
When validating a database, either using the VALIDATE DATABASE statement,
or with the Validate utility (dbvalid -d), an error would not have been reported
if a transaction log page was encountered in the database file. This has
been fixed.
Note, this situation would have been due to file corruption on the disk.
Transaction log pages are not stored within the database.
================(Build #3939 - Engineering Case #584512)================
When validating databases, using the VALIDATE DATABASE statement or the Validation
utility (dbvalid -d), spurious errors could have been reported under rare
circumstances. This has been fixed.
================(Build #3939 - Engineering Case #584304)================
If an application executed an external environment procedure and subsequently
canceled the external call, then all other external environment requests
would have blocked until the external procedure being cancelled acknowledged
the cancel request. This problem has now been fixed.
================(Build #3934 - Engineering Case #581986)================
Executing certain illegal operations on catalog tables could have caused
the server to crash after the statement was rejected with an appropriate
error. This has been fixed so that the server now works correctly.
================(Build #3932 - Engineering Case #582756)================
The changes for Engineering case 544187 caused the server to have a limited
number of temporary files that it could create in the lifetime of the server
process (64K on non-UNIX systems, 4G on UNIX systems). The problem would
have shown up as an error writing to an invalid handle, and a fatal "disk
full" error would have been reported. On Windows, the message in the
server console would have looked like the following:
A write failed with error code: (6), The handle is invalid.
Fatal error: disk full when writing to "???"
This problem has been fixed.
A related problem was also fixed where failure to create a temporary file
for the utility_db would have caused the server to crash.
================(Build #3928 - Engineering Case #581980)================
If an application executed a statement of the form:
SELECT ... INTO LOCAL TEMPORARY TABLE localtemptab FROM extenv_proc()
where extenv_proc() was an external environment procedure that returned
a result set, then the SELECT ... INTO statement would have succeeded, but
attempting to subsequently access the newly created localtemptab would have
resulted in a 'table not found' error. This problem has now been fixed.
================(Build #3927 - Engineering Case #581752)================
If the mirror server in a mirroring system failed to start (e.g. due to a
TCPIP port conflict), it could have crashed when shutting down. This has
been fixed.
================(Build #3926 - Engineering Case #581586)================
If the -dt server command line option was used to specify a directory where
temporary files were to be created, the server did not use that location
when attempting to clean up old temporary files previously left behind when
a server did not shut down cleanly. This has been fixed.
A workaround is to specify the temporary file location using one of the
SATMP, TMP, TMPDIR or TEMP environment variables.
================(Build #3925 - Engineering Case #581249)================
If an application attempted to create a proxy table using the CREATE EXISTING
TABLE statement, and the remote server specified in the location clause did
not exist, then the server may have given a misleading syntax error instead
of the "remote server not found" error. This problem has now been fixed,
and the proper error is now always returned.
================(Build #3925 - Engineering Case #580390)================
If both a BEFORE UPDATE and a BEFORE UPDATE OF trigger were defined on the
same table, and the BEFORE UPDATE OF trigger was not the first to fire when
an update was performed using INSERT ... ON EXISTING UPDATE, then the BEFORE
UPDATE OF trigger would not have fired at all. This has now been fixed.
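A sketch of the affected combination (table, column, and trigger names are
illustrative):
CREATE TRIGGER trg_bu BEFORE UPDATE ON t
FOR EACH ROW
BEGIN
    MESSAGE 'before update' TO CONSOLE;
END;
CREATE TRIGGER trg_buo BEFORE UPDATE OF c1 ON t
FOR EACH ROW
BEGIN
    MESSAGE 'before update of c1' TO CONSOLE;  -- previously did not fire
END;
INSERT INTO t( pk, c1 ) VALUES( 1, 'x' ) ON EXISTING UPDATE;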
================(Build #3925 - Engineering Case #580146)================
If an application attempted to drop a user that had externlogins defined,
then the server would have given an error indicating that externlogins were
defined. The server behaviour has now been changed to quietly drop externlogins
as part of dropping a user.
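For example (the user, server, and login names are illustrative), dropping
a user such as the following previously reported an error; the externlogin
is now dropped quietly:
CREATE EXTERNLOGIN some_user TO remsrv REMOTE LOGIN rlogin IDENTIFIED BY 'pwd';
REVOKE CONNECT FROM some_user;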
================(Build #3924 - Engineering Case #580773)================
The server could have hung in some very rare circumstances, when there was
considerable temporary file growth. This has now been fixed. A workaround
is to pre-grow the temporary file using the ALTER DBSPACE statement, or to
create a DiskSpace event to periodically grow the temporary file.
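The workarounds can be sketched as follows (the sizes, event name, and
threshold are illustrative):
ALTER DBSPACE TEMPORARY ADD 200 MB;
or
CREATE EVENT grow_temp TYPE DiskSpace
WHERE event_condition( 'TempFreePercent' ) < 20
HANDLER
BEGIN
    ALTER DBSPACE TEMPORARY ADD 50 MB;
END;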
================(Build #3923 - Engineering Case #580772)================
Executing a remote query that involved the use of the ROWID() function, where
the table specified to ROWID was a remote table, would very likely have caused
the server to crash when the query was processed in partial passthru. This
problem has now been fixed.
================(Build #3922 - Engineering Case #580598)================
Tools such as the Data Source utility (dbdsn), which need to run as an elevated
process to perform privileged operations on Windows Server 2008 R2, failed
to do so and would have reported errors when run in an unelevated command
shell. This has been fixed.
================(Build #3921 - Engineering Case #577928)================
It was possible for the server to crash when executing an UPDATE or DELETE
statement for particular execution plans. This has been fixed.
================(Build #3920 - Engineering Case #580004)================
When describing arguments of a stored procedure called with non-positional
arguments, no information was returned from the server.
For example:
CREATE PROCEDURE foo ( IN a INTEGER, OUT b INTEGER )
BEGIN
SET b = a * 2
END
No information was returned when describing input arguments for the statement
"call foo( b=?, a=?)". This has been fixed.
================(Build #3918 - Engineering Case #577297)================
If a procedure was used in the FROM clause of a query, it was possible for
the server to crash if the procedure result set had a particular form. This
has been fixed.
================(Build #3917 - Engineering Case #578153)================
If a database created with a version 9 or earlier server was backed up via
the Backup utility (dbbackup), or the BACKUP DATABASE statement, the backup
copy could not have been unloaded using the Unload utility from version 10
or later. This has been fixed.
================(Build #3917 - Engineering Case #577548)================
It was possible for the server to apply stale values in an UPDATE statement
if concurrent requests issued UPDATE statements at isolation level 0 or 1.
This would occur with particular timing when the UPDATE statements did not
bypass the query optimizer, and neither UPDATE statement contained a join,
or a subquery converted to a join. This has been fixed.
================(Build #3916 - Engineering Case #579018)================
If an application created a proxy table and specified the remote owner in
the location clause, then attempting to access the proxy table would have
failed with a syntax error if the remote owner name was a keyword. This problem
has now been fixed.
================(Build #3915 - Engineering Case #580168)================
When a NUMERIC or DECIMAL column with precision greater than 47 was fetched
using OLE DB, ODBC, or JDBC, a memory buffer overrun would have occurred.
This would have resulted in a corrupted heap, and the client application
may then have terminated unexpectedly. This problem has been
fixed.
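For example, fetching a column like the following through ODBC, OLE DB, or
JDBC could previously have corrupted the client heap (the names are
illustrative):
CREATE TABLE t_big( n NUMERIC(60,10) );
SELECT n FROM t_big;  -- fetching this column triggered the buffer overrun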
================(Build #3914 - Engineering Case #578365)================
In extremely rare circumstances, the internal representation of the lower
bound of an index scan could have become corrupted for servers run on 32
bit systems. The problem could have manifested itself in many ways, although
the most likely would have been an incorrect result set being returned. This
has now been corrected.
================(Build #3914 - Engineering Case #578191)================
If a server had request-level logging enabled (e.g. the -zr sql command line
option) while the stored procedure debugger was being used (i.e. Sybase Central was
in "Debug mode"), the server could have crashed. This problem has
been fixed.
================(Build #3914 - Engineering Case #558286)================
A query may have taken a very long time to execute if it contained an uncorrelated
subquery in an ANY or ALL predicate that referenced a procedure that was
not inlinable. The problem was due to the subquery plan that was generated
not having a work table above the procedure call, so every repeated subquery
execution caused the procedure to reexecute. This has been fixed.
================(Build #3913 - Engineering Case #578130)================
If LOAD TABLE, or OPENSTRING were used, and all of the following conditions
were true:
- ENCODING 'UTF-8' was specified
- the destination table had both CHAR and NCHAR columns
- the CHAR encoding was not UTF-8
- file data that was destined for a CHAR column contained illegal characters
or characters which could not be converted losslessly to the CHAR encoding
the server could have crashed. This problem has been fixed.
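A sketch of the affected load (the file, table, and column names are
illustrative):
LOAD TABLE t( char_col, nchar_col )
FROM 'data.txt'
ENCODING 'UTF-8';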
================(Build #3909 - Engineering Case #577109)================
If an application made use of the Java in the database feature, and a Java
stored procedure wrapper was defined such that the number of SQL arguments
in the stored procedure was greater than the number of arguments defined
in the Java method signature, then calling the Java stored procedure may
have crashed the server. This problem has now been fixed, and a Java signature
mismatch error will now be returned.
================(Build #3909 - Engineering Case #557677)================
The server automatically collects and updates column statistics during the
execution of DML statements. Earlier versions of the server did the auto
collection only when a single DML statement affected more than a threshold
number of rows, in order to reduce the overhead of collecting statistics
on a relative basis. Support was added in release 10.0.1 to also do some
automatic statistics collection on all DML statements, including those affecting
as few as a single row. A subtle defect in this new support caused the server
to miss some opportunities for automatic statistics collection for UPDATE
statements, as opposed to the INSERT and DELETE statements. This has now
been corrected.
================(Build #3908 - Engineering Case #576913)================
If an application executed a remote query that must be handled in NO PASSTHRU
mode, and the query contained many tables, then it was possible for the server
to fail assertion 101508, instead of giving the "syntactic limit exceeded"
error. This problem has now been fixed, and the error is now properly returned.
================(Build #3907 - Engineering Case #576213)================
If a SELECT statement included an alias specified as a single-quoted string,
and the string contained a double quote or backslash, a syntax error should
have been given, but was not. This has been fixed; an error will be reported
in this case.
================(Build #3907 - Engineering Case #571806)================
The server may have returned the nonfatal assertion error 102605 "Error
building columns" when a query used a complex subselect on the Null
supplying side of an outer join.
For example:
select V1.N1
from T1 V0
left outer join
( select 'M' as N1
from ( select V4.c as xx
from T2 as V4,( select count(*) as N3 from T3 ) V3
where V4.b = V3.N3 ) V2
) V1
on V0.a = V1.N1
This has now been fixed.
================(Build #3906 - Engineering Case #576263)================
If the CREATE DATABASE statement was used to create a database with a filename
that contained a \n (for example CREATE DATABASE 'c:\\new.db'), the file_name
column in the SYSFILE system table could have been incorrect. An incorrect
file_name value would only have occurred if the CREATE DATABASE statement
was issued when connected to a database other than the utility_db. This has
been fixed.
Note, this problem did not affect the Initialization utility (dbinit), or
applications calling the dbtools library.
================(Build #3906 - Engineering Case #575121)================
In exceptionally rare circumstances, the server may have crashed when using
user-defined data types with a DEFAULT clause. This has been fixed.
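For example (the domain and table names are illustrative):
CREATE DOMAIN money_t NUMERIC(19,4) DEFAULT 0;
CREATE TABLE invoices( amount money_t );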
================(Build #3905 - Engineering Case #576009)================
The size of the stack used by some database server threads was not governed
by the -gs and -gss server command line options. Some of these threads used
stacks that were excessively large (1MB) for a CE environment, and could
have led to problems loading DLLs on some CE systems. These stack sizes
have been reduced to 64K or less. The default stack size for request tasks
is unchanged at 96K. Some versions did not allow a -gss value of less than
96K, but now values as low as 64K are permitted.
================(Build #3903 - Engineering Case #575323)================
When executing a query that referenced a proxy table in another SQL Anywhere
database, and made use of the newid(), strtouuid() or uuidtostr() system
functions, it may have executed much slower than expected. This problem has
now been fixed.
================(Build #3903 - Engineering Case #574473)================
The conflict() function, used with SQL Remote in RESOLVE UPDATE triggers,
did not return correct results. The result of the function was always FALSE.
This has been fixed.
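A sketch of the affected usage, assuming a SQL Remote consolidated database
(the table, column, and trigger names are illustrative):
CREATE TRIGGER resolve_products RESOLVE UPDATE ON products
REFERENCING OLD AS old_row
FOR EACH ROW
BEGIN
    IF conflict( price ) THEN
        MESSAGE 'price conflict' TO CONSOLE;  -- previously never reached
    END IF;
END;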
================(Build #3901 - Engineering Case #574912)================
If a database contained a DISCONNECT event handler called E, and a transaction
log containing a DROP EVENT E statement was applied to the database, recovery
could have reported failed assertion 100904 ("Failed to redo a database
operation ... Event E in use"). This has been fixed.
================(Build #3900 - Engineering Case #574708)================
Executing a query that required a result set to be fetched from a remote
table could have, in very rare situations, caused the server to crash. This
problem has now been fixed.
================(Build #3900 - Engineering Case #574707)================
If an application made an external environment request, and the server was
under heavy load, then there was a chance the external environment would
have taken too long to respond to the request. In such situations, the server
would have timed out the request and returned control back to the application.
If the external environment responded to the request at exactly the same
time the server timed the request out, then there was a very small chance
the server would have hung. This has now been fixed.
================(Build #3900 - Engineering Case #574475)================
Under very rare circumstances, a client-side backup that truncates or renames
the log may have hung the server. This has now been fixed.
================(Build #3899 - Engineering Case #573172)================
The server could have become deadlocked if the 'wait after end' option was
used when performing a backup. This has now been fixed.
================(Build #3898 - Engineering Case #541908)================
For some forms of simple DELETE statements on tables with computed columns,
the server may have returned the following error:
*** ERROR *** Assertion failed: 106104 (...) Field unexpected during compilation
(SQLCODE: -300; SQLSTATE: 40000)
This has been fixed.
================(Build #3897 - Engineering Case #574014)================
In some circumstances, it may have taken longer than expected to shut down
a database when it was started and then immediately stopped. This would only
have been noticeable if the following three conditions were true:
a) cache collection was enabled on the previous run of the database,
b) cache warming was enabled on the current run of the database,
c) the server accessed a large number of pages during the cache collection
interval, i.e., the first queries executed against the database referenced
a large number of pages (as may be the case in a scan of a large table or
set of tables).
This has been fixed. Note, cache collection is on by default, and is controlled
by the -cc server command line option. A workaround is to disable cache warming
using the -cr- server command line option.
================(Build #3897 - Engineering Case #571215)================
If two grouped queries appeared in different outer joins with an unquantified
expression in the group-by list, it was possible for NULL to be substituted
for the unquantified expression value elsewhere in the query. For example,
the following query could have returned NULL incorrectly for columns A1,
A2:
select today(*) A1, D2.A2, D3.A3
from dummy D1
left join ( select today(*) A2 from sys.dummy group by A2 ) D2 on 1=1
left join ( select today(*) A3 from sys.dummy group by A3 ) D3 on 1=0
This fault could have led to the following (non-fatal) assertion failure:
Run time SQL error -- *** ERROR *** Assertion failed: 102501 (10.0.1.####)
Work table: NULL value inserted into not-NULL column
SQLCODE=-300, ODBC 3 State="HY000"
This has been fixed.
================(Build #3897 - Engineering Case #562236)================
In some cases, client applications running on Solaris systems may have hung
while communicating with the server through shared memory. Other symptoms
may also have included communication errors between the client and the server.
This was more likely to happen on multi-processor machines. This has been fixed.
================(Build #3896 - Engineering Case #573636)================
After performing a database backup, a warning message of the form "Unable
to open backup log '...'" could have been sent to the client and console.
Note that this warning came as a "message" to the client, not as
a warning SQLCODE from the BACKUP statement. The problem was far more likely
to have occurred on Windows Vista or later when running as a non-elevated
process, as the server typically tried to write the log into the SQL Anywhere
installation directory, which is not typically writable by non-elevated processes.
This problem was fixed by properly detecting which of the possible directories
for placement of backup.syb were writable, and adding the %ALLUSERSPROFILE%\SQL
Anywhere XX (XX=version number) directory to the list of possible directories.
On Vista and later, the %ALLUSERSPROFILE% directory is typically C:\ProgramData.
On earlier versions of Windows, it is typically C:\Documents and Settings\All
Users\Application Data.
================(Build #3895 - Engineering Case #573452)================
Fetching a result set may, on rare occasions, have caused an application
that connected using a newer version of Open Client 15 to hang or give a
protocol error. In addition, cancelling a request may also have caused the
application to hang or give a protocol error. These problems have now been
fixed.
Please note that if an application does use Open Client 15 to connect to
SQL Anywhere, then it will be necessary to update the version of Open Client
15 once this fix is installed.
================(Build #3895 - Engineering Case #571284)================
The server may have crashed following execution of an ALTER TABLE statement
that added or modified columns with a DEFAULT or COMPUTE clause, if all materialized
views had been dropped since the last server start. This has been fixed.
================(Build #3894 - Engineering Case #571957)================
The server could have generated a dynamic memory exhausted error when trying
to execute a very complex statement. This has been fixed. The server will
now return the SQL error SQLSTATE_SYNTACTIC_LIMIT.
================(Build #3894 - Engineering Case #570098)================
In some circumstances, a 10.x server could have hung at shutdown if a parallel
backup had been performed, or an outbound HTTP connection (SOAP, etc) had
been used. When those features were used on an 11.x server, the server would
not have hung at shutdown, but unpredictable behaviour (crash, hang) could
have occurred at runtime. This problem has been fixed.
Note that the likelihood of encountering the problem with 11.x servers is
extremely small.
================(Build #3894 - Engineering Case #552067)================
The server could have become deadlocked if the 'wait after end' option was
used when performing a backup. For the deadlock to have occurred, there must
have been an active transaction when the backup attempted to truncate the
log, and then a checkpoint and commit must have occurred, in that order on
separate connections, between the time that the backup attempted to truncate
the log and when the backup noticed that there was an active transaction
(a very short time interval). This has now been corrected.
A work around in most cases is to omit the 'wait after end' option, as it
is often not required.
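The option appears in backup statements of the form (the directory is
illustrative):
BACKUP DATABASE DIRECTORY 'c:\\backups'
TRANSACTION LOG TRUNCATE
WAIT AFTER END;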
================(Build #3892 - Engineering Case #571628)================
If an application called a stored procedure which in turn called a proxy
procedure, and the local stored procedure generated a warning prior to calling
the proxy procedure, then the server may have returned the error: "remote
server not capable". This problem has now been fixed.
================(Build #3892 - Engineering Case #571611)================
On Linux systems, the server could have used larger amounts of memory than
intended under some circumstances. Affected functionality included external
function calls, the Java external environment, outbound HTTP connections,
and remote data access. This has been fixed.
================(Build #3892 - Engineering Case #570738)================
In very rare, timing dependent cases, a server backup could have hung while
starting up. This has been fixed.
================(Build #3891 - Engineering Case #570093)================
If a SELECT statement contained the clause FOR UPDATE BY LOCKS, and a cursor
was opened without specifying updatability cursor flags from the client application,
then the BY LOCKS specification could have been incorrectly ignored by the
server. This would only have happened if the statement was a simple query
that was processed bypassing optimization. The consequence of this was that
intent locks would not have been taken on the table being scanned. The only
API affected by this problem was Embedded SQL.
Also, if a statement executing at isolation level 3 used an index scan for a
table marked for update, then the rows of the table could have incorrectly
been locked with exclusive (write) locks, instead of intent locks. The consequence
of this was that other connections attempting to read the row could have
blocked, even if the row was ultimately not modified.
These problems have now been fixed.
================(Build #3891 - Engineering Case #567942)================
If a query was processed using a parallel execution plan, it was possible
for the statement to exceed the setting of the Max_temp_space option by a
factor of the number of branches in the plan. This has been fixed.
================(Build #3887 - Engineering Case #570652)================
If a version 10.x or later database had a torn (i.e. "partial") write
in the checkpoint log, the server could have reported assertion failures,
including 201866 (only on 10.x servers), 201869 (only on 11.x servers), 200502,
200505, or 200512. In the case of a single torn write, these failures should
not have been reported, and the database should be recoverable once the server
is upgraded to include this fix. If upgrading the server does not resolve
the assertion failures, the database is likely corrupt and does not just
have a torn write.
================(Build #3887 - Engineering Case #570468)================
When attempting to create a proxy table, if the connection was subsequently
cancelled, either explicitly or implicitly, by shutting down the database
or server, then there was a small chance the server would have crashed. This
problem has now been fixed.
================(Build #3887 - Engineering Case #569931)================
If a user-defined function using the Transact-SQL dialect issued a RAISERROR
statement, and that function was called from another Transact-SQL function,
the calling application could have failed to receive the error. In some cases,
this would result in the application hanging. This has been fixed.
================(Build #3887 - Engineering Case #569314)================
The datepart() function could have returned an incorrect result for the CALWEEKOFYEAR
and CALYEAROFWEEK date parts for dates greater than December 29 of years
for which January 1 of the following year falls on Tuesday, Wednesday or
Thursday. The first week of a year is defined as the week containing the
year's first Thursday. All days in that week should have the same CALWEEKOFYEAR
and CALYEAROFWEEK values, even for days in the previous year. This has been
fixed.
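For example, December 29-31, 2008 fall in ISO week 1 of 2009 (January 1,
2009 is a Thursday), so both values should reflect the following year:
SELECT datepart( calweekofyear, '2008-12-31' ),  -- should be 1
       datepart( calyearofweek, '2008-12-31' );  -- should be 2009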
================(Build #3886 - Engineering Case #570140)================
Attempting to insert a binary or varbinary variable, or host variable, into
a Microsoft SQL Server proxy table with an image column would have failed
with an 'invalid precision' error when the length of the binary value was
between 8001 and 32768 bytes. This problem has now been fixed.
================(Build #3886 - Engineering Case #570094)================
The server would have crashed when parsing a procedure that contained a CASE
statement with a WHEN clause which had a unary search condition other than
ISNULL. When unparsing the WHEN clause, the server expected a binary search
condition and crashed when attempting to access the second operand. This
has been fixed.
================(Build #3886 - Engineering Case #569784)================
If a procedure definition did not contain a RESULT clause, and returned its
results from a SELECT statement with a TOP n clause that used variables,
describing a query of the form:
SELECT *
FROM proc()
could have been incorrectly described by the server as having a single column
with the name "expression". The result set returned by the statement
would have had the correct schema. This has been fixed.
Calling the procedure directly (i.e. call proc()) was not affected by this
problem, and would be correctly described, so this statement can be used
as a workaround. Describing the schema of the result set using a RESULT clause
is also a workaround. To fix procedures created with older servers, an ALTER
PROCEDURE <proc> RECOMPILE has to be executed for each such procedure.
================(Build #3886 - Engineering Case #568976)================
In rare, timing dependent circumstances, the server may have hung when executing
queries using intra-query parallelism. This has been fixed.
A workaround is to disable intra-query parallelism by setting the max_query_tasks
option to 1.
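The workaround can be applied with:
SET OPTION PUBLIC.max_query_tasks = 1;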
================(Build #3886 - Engineering Case #568421)================
Remote server capabilities were accidentally set OFF during a rebuild of
a case-sensitive database. This has been fixed.
================(Build #3885 - Engineering Case #570308)================
Attempting to return a result set from within a BEGIN ATOMIC block would
have caused a memory leak in the server. This has been fixed.
Note that returning a result set from within an atomic block is not allowed,
and an error is issued in this case. This behaviour has not changed.
================(Build #3885 - Engineering Case #569942)================
If an application that was connected using Open Client executed a query that
generated a large column name, then there was a chance the application would
have exited with a TDS protocol error. This problem has now been fixed.
================(Build #3884 - Engineering Case #571436)================
A statement that used subsumed ORDER BY clauses in aggregate functions would
have failed with a syntax error. This type of statement executed without
error in version 9.0.2. This has now been fixed.
For example:
select list(e1 ORDER BY e2, e3 ), list( e4 ORDER BY e2,e3,e5)
from ....
The first ORDER BY clause 'e2, e3' is subsumed by the second ORDER BY clause
'e2,e3,e5'.
================(Build #3884 - Engineering Case #569330)================
The server could have crashed, or failed an assertion, if a materialized
view had been disabled soon after an update. This has been fixed.
================(Build #3884 - Engineering Case #569127)================
The result set returned from calling the system procedure dbo.sp_jdbc_primarykey()
may have contained more rows than it should. This problem was introduced
by the changes made for Engineering case 531119, and has now been fixed.
================(Build #3884 - Engineering Case #568663)================
The server could have crashed during Application Profiling if any variables
or host variables were present in the workload. This has been fixed.
================(Build #3884 - Engineering Case #565232)================
In very rare circumstances, if a complex view which cannot be unflattened
(e.g., a grouped view) was used multiple times in the same statement, the
optimizer may have generated an access plan which computes an incorrect result
set. One of the conditions for this to have occurred was for a predicate
of the form " V.X theta [ANY|ALL] ( select aggregate(e) from V as
V1 where ... )" to be present in one of the query blocks of the statement.
This has been fixed.
================(Build #3883 - Engineering Case #569307)================
A SQL Anywhere service requiring authorization returned realm information
based on the request URL. This has been fixed. The realm is now always set
to the database name (or its alias) regardless of the request URL.
================(Build #3883 - Engineering Case #569298)================
In some cases, it was possible for recursive UNION queries to give incorrect
results. This has been fixed.
================(Build #3883 - Engineering Case #567543)================
If the WHERE clause of a query block contained many predicates using host
variables, it was possible that the PREPARE or OPEN times would have been
unnecessarily long. The server was not recognizing that two expressions with
host variables were the same. This has been fixed.
================(Build #3882 - Engineering Case #568991)================
Connecting to the server using Open Client and attempting to describe the
input parameters of a dynamic statement, would very likely have caused the
application to hang. This problem has now been fixed.
================(Build #3882 - Engineering Case #568233)================
If a query had multiple uses of a function such as now() and one of the uses
was within a COALESCE expression, then it was possible for the query to unexpectedly
give different values for the two uses. For example, the following query
could return different values for time1 and time2:
create temporary function T_W( x int )
returns timestamp
begin
waitfor delay '00:00:00.500';
return NULL;
end
select coalesce( T_W(0), now() ) time1, coalesce( T_W(1), now() ) time2
In addition to now(), the following expressions also had this behaviour:
getdate()
today()
current date
current time
current timestamp
current utc timestamp
In addition to coalesce() expressions, the same problem could occur with
the now() or related function used within these expressions:
ISNULL()
IFNULL()
IF
CASE
ARGN
This has been fixed.
================(Build #3881 - Engineering Case #568660)================
Referential integrity constraints (i.e. primary keys, foreign keys or unique
constraints) involving long values may have resulted in unexpected errors.
This problem has now been fixed.
================(Build #3881 - Engineering Case #568645)================
On very rare occasions, the server may have failed to start on Solaris systems.
This has been fixed.
================(Build #3881 - Engineering Case #568641)================
Attempting to insert a large long binary value into a Microsoft SQL Server
proxy table, would have failed with a "wrong precision" error.
This problem was introduced by the fix for Engineering case 565651, and has
now been fixed.
================(Build #3881 - Engineering Case #568468)================
The SQL Anywhere server supports the "UPDATE( colname )" condition
that can be evaluated in statement-level trigger code to determine if the
value of the specified column has been modified from its original value in
any of the rows in the row set affected by the UPDATE/INSERT/MERGE statement
being executed. The server would have failed to evaluate the condition correctly
for multi-row sets under certain circumstances. This has been corrected so
that the server will now evaluate the condition correctly.
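A sketch of the condition in a statement-level trigger (the table and column
names are illustrative):
CREATE TRIGGER au_t AFTER UPDATE ON t
REFERENCING OLD AS old_rows NEW AS new_rows
FOR EACH STATEMENT
BEGIN
    IF UPDATE( status ) THEN
        MESSAGE 'status changed in at least one row' TO CONSOLE;
    END IF;
END;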
================(Build #3879 - Engineering Case #568217)================
On Mac OS X systems, an archive restore could have failed with an "insufficient
space" error, even when enough space to complete the restore was available.
This has been fixed.
================(Build #3877 - Engineering Case #563533)================
Transactions could have failed, with the error SQLSTATE_PREEMPTED incorrectly
returned to the application. This has now been fixed.
================(Build #3876 - Engineering Case #567427)================
If a transaction running under snapshot isolation attempted to update or
delete a row that had been deleted and committed by another transaction,
the update or delete would have failed with the wrong error. Typically the
error would have been "Unable to find in index 't' for table 't' (SQLCODE:
-189; SQLSTATE: WI005)", or it could have failed silently. This has
now been fixed.
================(Build #3876 - Engineering Case #567347)================
If an application connected to a server via Open Client, using a newer version
of Open Client 15, then it was likely that cancelling a request would have
given a protocol error on the next request to the server. This problem has
now been fixed.
================(Build #3874 - Engineering Case #567163)================
If an application attempted to start an external environment on a server
that was busy stopping or starting external environments for other connections,
there was a chance the server would have returned a thread deadlock or 'main
communication thread not found' error. There was also a chance, although
small, that the server would have crashed in this situation. This problem
has now been fixed.
================(Build #3874 - Engineering Case #567157)================
In very rare circumstances, the server could have crashed during startup
while recovering a database that used a mirror transaction log file. This
has been fixed.
================(Build #3874 - Engineering Case #566693)================
On certain processors other than x86 and x86_64 (64 bit HP-UX for example),
the server may have crashed in extremely rare conditions when using a connection
number to get connection information. Examples of getting this type of connection
information include getting a connection property for another user, or calling
sa_conn_info. This has now been fixed.
================(Build #3874 - Engineering Case #562627)================
SQL Anywhere supports the "UPDATE( colname )" condition that can
be evaluated by trigger code to determine if the value of the specified column
has been modified from its current value in the row. The server was failing
to evaluate the condition correctly when the column value was modified internally
during statement execution. As one example, if the user statement did not
modify the column value, but a BEFORE trigger did, then the condition in
an AFTER trigger failed to evaluate to TRUE when it should. This has been
fixed so that the server will now evaluate the condition correctly, regardless
of when the column value is modified.
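A minimal sketch of the scenario described above; the table, column, and trigger names are illustrative, not taken from the original report:

```sql
CREATE TABLE t1 ( pk INT PRIMARY KEY, status CHAR(10) );

-- A BEFORE trigger that modifies 'status' internally, even when the
-- user's UPDATE statement did not touch that column.
CREATE TRIGGER t1_before BEFORE UPDATE ON t1
REFERENCING OLD AS old_row NEW AS new_row
FOR EACH ROW
BEGIN
    SET new_row.status = 'changed';
END;

-- Before this fix, UPDATE( status ) could have evaluated to FALSE here
-- even though the BEFORE trigger changed the column's value.
CREATE TRIGGER t1_after AFTER UPDATE ON t1
FOR EACH ROW
BEGIN
    IF UPDATE( status ) THEN
        MESSAGE 'status was modified' TO CLIENT;
    END IF;
END;
```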
================(Build #3873 - Engineering Case #566805)================
Creating a string object shorter than, but within roughly a database page
size of, the 2^31-1 (2GB) upper length limit could have incorrectly resulted
in the SQL error MAX_STRING_LENGTH_EXCEEDED. This has been fixed.
================(Build #3873 - Engineering Case #566685)================
If a simple statement was processed by the server and bypassed the query
optimizer, it was possible for the server to choose an inefficient plan if
the statement contained a specific form of contradiction. For example, the
following statement could have generated a plan that would have read all
rows from TRange:
select * from TRange where x1 = 3 and x2 between 2 and 1
A more efficient plan would recognize that the BETWEEN predicate is never
true (since 2 > 1) and would use a prefilter to avoid fetching any rows
from TRange. This has been fixed so that the more efficient plan is now
selected.
================(Build #3873 - Engineering Case #566372)================
Data inserted into a compressed column may have caused a decompression error
when retrieved. This would only have occurred if the data was already compressed
so that compression would result in increased data length, the column was
created with the NO INDEX clause, and the data length was very close to a
multiple of 8 times the database page size. An error in calculating the maximum
possible length for the compressed data has been fixed.
================(Build #3873 - Engineering Case #566043)================
When working with .NET data sources and the OLEDB adapter using Visual Studio,
the Configure Dataset with Wizard dialog may have resulted in "Dynamic
SQL generation is not supported against a SelectCommand that does not return
any base table information." or the TableAdapter Query Configuration
Wizard dialog may have resulted in "Failed to get schema for this query".
The SQL Anywhere OLE DB provider has been corrected.
================(Build #3873 - Engineering Case #537126)================
When running on multiprocessor machines, statements with joins may have caused
a server crash in rare conditions. This is now fixed.
================(Build #3872 - Engineering Case #565624)================
Under certain non-deterministic conditions, a server-side backup could have
blocked other tasks, such as remote procedure calls, HTTP requests, and external
environments, from running. As well, in certain very rare and timing-sensitive
conditions, it was also possible that a backup could have hung indefinitely
while starting. Both of these problems have now been fixed.
================(Build #3871 - Engineering Case #565837)================
When using the INPUT statement with dbisqlc, column inputs may have failed
if the length of the input record exceeded the capacity of the input buffer.
This failure could have been signalled by a conversion error, or it could
have gone undetected (the remaining columns having been truncated or set
to the null value). This problem has now been fixed.
================(Build #3869 - Engineering Case #566165)================
In very rare circumstances, the server could have crashed if the procedure
debugger ran a 'step into' or 'step over' request at the end of a procedure
or trigger. This has been fixed.
================(Build #3869 - Engineering Case #566046)================
If the maximum value in an autoincrement column after performing a LOAD TABLE
statement was a negative value, the SYSTABCOL.max_identity value for that
column would have been set to a negative value. This would have caused subsequent
inserts into the table, which did not provide a value for the autoincrement
column, to generate errors. This situation could have arisen when rebuilding
a database having a table with an autoincrement column and only negative
values in the column. Note that the use of negative values with autoincrement
columns is discouraged. This has been fixed so that the max_identity value
in this situation will now be set to zero after the LOAD TABLE statement
completes.
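A hedged sketch of the rebuild scenario; the table and data file names are hypothetical:

```sql
CREATE TABLE t ( id INT DEFAULT AUTOINCREMENT PRIMARY KEY, v CHAR(10) );

-- Suppose the data file contains only negative id values.
LOAD TABLE t FROM 'negative_ids.dat';

-- Before the fix, SYSTABCOL.max_identity for t.id was left negative and
-- this insert generated an error; after the fix, max_identity is set to
-- zero and a positive id is generated.
INSERT INTO t( v ) VALUES ( 'x' );
```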
================(Build #3868 - Engineering Case #565835)================
The server may have crashed if the procedure debugger's connection, and the
debugged connection, stopped at the same time. This has been fixed.
================(Build #3868 - Engineering Case #565283)================
When evaluating predicates outside of DML statements (for example, in SET
or IF statements in procedures, triggers, events, or batches), the server
could have improperly treated UNKNOWN as FALSE. For example, the following
statement should set @str2 to NULL, but it was incorrectly being set to FALSE:
SET @str2 = if NULL like 'a' then 'TRUE' else 'FALSE' endif;
This has been fixed.
================(Build #3868 - Engineering Case #564277)================
If a simple statement was processed and bypassed the optimizer, it was possible
for the server to choose an index that was less efficient than the best one,
leading to poor performance. This problem would have occurred if the WHERE
clause contained equality and range predicates. This has been fixed.
A workaround can be achieved by using "OPTIONS(FORCE OPTIMIZATION)"
in the query text, or by adding the following switch to the server command
line:
-hW AllowBypassCosted
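For example, the statement-level workaround named above could be applied as follows; the table and predicates are illustrative:

```sql
-- Force full cost-based optimization for this statement only, so the
-- optimizer-bypass index choice is not used.
SELECT *
FROM t
WHERE a = 10 AND b BETWEEN 1 AND 5
OPTIONS( FORCE OPTIMIZATION );
```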
================(Build #3868 - Engineering Case #564025)================
Messages generated by the MESSAGE ... TO EVENT LOG statement were not printed
to the system event log when the server was started with the -qi switch.
This has been fixed.
================(Build #3868 - Engineering Case #557973)================
If an index already existed on a particular set of columns, attempting to
create a unique index over the same set of columns might have failed, if
the index contained deleted, but not cleaned, entries. This has been fixed.
Note, a work around would be to call sa_clean_database manually before creating
the index.
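The workaround can be sketched as follows; the table, columns, and index name are hypothetical:

```sql
-- Remove deleted-but-uncleaned index entries first, then create the
-- unique index over the same columns as the existing index.
CALL sa_clean_database();
CREATE UNIQUE INDEX t_a_b_unique ON t ( a, b );
```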
================(Build #3868 - Engineering Case #557812)================
If an UPDATE or DELETE statement used a keyset cursor and the statement executed
at isolation level 0 or 1, it was possible for the statement to update or
delete rows improperly. This could have occurred if another transaction deleted
a row identified by the UPDATE or DELETE statement and committed, and then
another transaction inserted a row and committed before the UPDATE or DELETE
statement had finished processing. In this case, the newly inserted row would
have been improperly processed by the UPDATE or DELETE statement. DELETE
and UPDATE statements use a keyset cursor if there is a possibility that
an updated row could be re-processed by the statement (for example if an
UPDATE modifies a column in the index scan, or if a join is present). This
has now been fixed.
A workaround would be to use a higher isolation level (2 or 3).
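For example, before executing the affected statement (names are illustrative):

```sql
-- Run the UPDATE at isolation level 2 so rows identified by the
-- statement remain locked until it completes.
SET TEMPORARY OPTION isolation_level = 2;
UPDATE t SET x = x + 1 WHERE y > 10;
```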
================(Build #3868 - Engineering Case #556111)================
In specific situations, it was possible for the server to crash when processing
a statement that contained a LIST aggregate function with an ORDER BY specification
inside of a subquery. This has been fixed.
================(Build #3868 - Engineering Case #523709)================
The server may have crashed when more than one Unload utility (dbunload)
was run concurrently with internal rebuild (ie. -an, -ar, -ac). This has
been fixed.
================(Build #3867 - Engineering Case #565474)================
An HTTP client procedure may have hung when receiving chunk mode encoded
data. For this problem to have occurred, the client needed to identify itself
as an HTTP/1.1 version client (by default it identifies itself as HTTP/1.0).
This may be done in any of the following ways:
SET 'HTTP(VER=1.1)'
TYPE 'HTTP:POST' where the combined length of the input parameter values
equals or exceeds 8192 bytes
TYPE 'HTTP:POST' SET 'HTTP(CH=ON)'
This has been fixed.
================(Build #3867 - Engineering Case #392470)================
When the string_rtruncation option is on, the error "Right truncation
of string data" (SQLCODE -638, SQLSTATE 22001) was not being given when
converting from numbers to binary. This has been corrected so that when the
option is on, this error is now given in the following cases:
- REAL values converted to BINARY(3) or shorter
- DOUBLE values converted to BINARY(3) or shorter
- exact numerics converted to BINARY where a leading byte is truncated and
the byte value is not zero
When converting exact numerics (bit, tinyint, smallint, unsigned smallint,
int, unsigned int, bigint, unsigned bigint, numeric) to binary, the numeric
value is first converted to one of the following types: INT, UNSIGNED INT,
BIGINT, UNSIGNED BIGINT. If the target binary is smaller than 4 bytes for
INT/UNSIGNED INT or smaller than 8 bytes for BIGINT/UNSIGNED BIGINT, then
the most significant bytes of the value are truncated. For example:
SELECT CAST( 0x12345678 AS INT ) ival,
CAST( ival AS BINARY(3) ) bval
returns 0x345678 for bval. After this change, an error is raised if one
of the truncated bytes is non-zero and the string_rtruncation option is on.
Further, when converting a too-long binary string to a number, errors are
generated as follows:
- for REAL, if the binary string is not 4 bytes long
- for DOUBLE, if the binary string is not 8 bytes long
- for BIGINT/UNSIGNED BIGINT, if the binary string is longer than 8 bytes
and a non-zero byte in the prefix would be truncated
- for INT/UNSIGNED INT, if the binary string is longer than 4 bytes and
a non-zero byte in the prefix would be truncated
For example, the following generate errors:
CAST( 0x123456 AS REAL )
CAST( 0x1234567800 AS REAL )
CAST( 0x1234567800 AS INT )
but, the following do not generate errors:
CAST( 0x12345678 AS REAL )
CAST( 0x0012345678 AS INT )
CAST( 0x0000000000123456789abcdef0 as UNSIGNED BIGINT )
================(Build #3866 - Engineering Case #565286)================
The server could have released locks on rows prematurely. Data corruption,
crashes, unexplained deadlocks and incorrect query results were all possible.
For this to have occurred though, there must have been a significant amount
of contention for a particular row. This has now been corrected.
================(Build #3866 - Engineering Case #565244)================
The SQL Anywhere HTTP server will generate error responses that may be classified
into two categories: System and User error messages. System error messages
are generated for the following conditions: protocol errors, time-out and
cancel. User error messages are generally caused by a SQL error. It is
recommended that the application handle SQL errors within an EXCEPTION clause
and render application specific error responses. By default a User error
message is output in the body of the response as HTML or plain-text when
the SERVICE is configured as TYPE 'HTML' or 'RAW' respectively. User error
messages may have been returned using chunk-mode transfer encoding, while
keeping the connection alive. In the event that the web service encountered
an error when outputting its response, or was explicitly canceled by dropping
its database connection, the response message was prematurely truncated.
A change has been made to make the default behaviour more consistent; SQL
errors explicitly handled by the application are not affected by these changes.
By default, System and User error messages now:
- do not use chunk mode transfer encoding
- explicitly set 'Connection: close' response header
- shut down the HTTP connection once the error message is sent
Any pending (pipelined) requests following the request encountering the
error are terminated. Also, an error response is guaranteed to close the
HTTP connection. Interrupting a response (already underway such that the
response headers have already been sent) will truncate the output and close
the HTTP connection. User error messages that are explicitly caught by the
EXCEPTION clause may also CALL SA_SET_HTTP_HEADER('Connection', 'close')
prior to issuing the error page to force the HTTP connection to close after
the response has been sent.
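A sketch of a web-service procedure that follows the recommendation above, handling its own SQL error and forcing the connection closed; the procedure name, SQLSTATE, and page content are illustrative:

```sql
CREATE PROCEDURE my_raw_service()
BEGIN
    DECLARE div_by_zero EXCEPTION FOR SQLSTATE '22012';
    CALL sa_set_http_header( 'Content-Type', 'text/html' );
    -- Application work that may raise a SQL error:
    SELECT 1 / 0;
EXCEPTION
    WHEN div_by_zero THEN
        -- Render an application-specific error page and force the HTTP
        -- connection to close once the response has been sent.
        CALL sa_set_http_header( 'Connection', 'close' );
        SELECT '<html><body>Request failed.</body></html>';
END;
```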
================(Build #3864 - Engineering Case #405704)================
A statement that contained a subselect with more than one column, should
have generated the error: "Subquery allowed only one select list item"
(SQLCODE -151, SQLSTATE 53023). If the subselect was coded using a single
'*' or 'T.*' in the select list, the error was not given. For example, the
following should give an error:
select *
from dummy
where dummy_col = ( select * from department where dept_id < 200 )
When the error was not given though, the extra columns were ignored. This
has been fixed.
================(Build #3863 - Engineering Case #564828)================
Following a cursor OPEN for a simple select statement that bypasses optimization,
the SQLCOUNT value could have been incorrectly set to 0 when the number
of rows was to be exact (i.e. the Row_counts option was 'On') and the number
of rows was estimated to be 0. This has been fixed so that the correct
setting, -1, is now used in these cases.
================(Build #3863 - Engineering Case #564677)================
A failed COMMENT ON PROCEDURE statement could have prevented the procedure
from being dropped. This has been fixed.
Note, a workaround is to stop and restart the database before attempting
to drop the procedure.
================(Build #3861 - Engineering Case #564066)================
When an HTTP connection that had made an external environment call was closed
at the exact same time that the external environment exited, there was a
chance that the server would have crashed. This problem has now been fixed.
================(Build #3861 - Engineering Case #564046)================
The changes for Engineering case 559413 introduced a problem where attempting
to cancel a Java call could have caused the Java VM to stop responding. This
has now been fixed.
================(Build #3861 - Engineering Case #564010)================
The server may have crashed if a floating point value was converted to a
string, and then subsequently cast to a string with a too small size. This
has been fixed.
================(Build #3861 - Engineering Case #563545)================
When executing several remote queries, if the remote queries contained NChar
variables in the WHERE clause, then some of the queries may have returned
an incorrect result. This problem has now been fixed.
================(Build #3860 - Engineering Case #386187)================
If a LIKE predicate was used on a LONG BINARY value that was longer than
would fit in a VARCHAR (32767), and the option String_rtruncation was set
to 'on', then a truncation error could inappropriately have been given. This
has been fixed.
The following example demonstrates the error:
create table patterns( c1 long binary );
insert into patterns values( repeat('a', 50000) );
select count(*) from patterns where c1 like '%aaa%'
The following error was returned: "Right truncation of string data"
SQLCODE: -638 SQLSTATE: 22001
================(Build #3858 - Engineering Case #562829)================
When querying a proxy table mapped to a remote ASE or Microsoft SQL Server
table, if the remote table had a varchar(255) column, then fetching data
from that column would have resulted in data truncation. This was due to
an off-by-one error, which has now been corrected.
================(Build #3858 - Engineering Case #454745)================
In some cases, an INSERT, UPDATE, or DELETE statement would have used a work
table that was not needed. This would have caused performance to be slower
than it could otherwise have been. This has been fixed.
================(Build #3857 - Engineering Case #562826)================
When backing up a database with no transaction log, a client-side transaction-log
only backup (i.e. using the Backup utility) would have caused the server
to crash. A transaction-log only server side backup (i.e. using the BACKUP
statement) did not cause a crash. Although a server side backup did not cause
a crash, the SQL error that was issued in this case gave no indication as
to what the failure actually was, i.e., it reported "Syntax error near
'backup option'". As well as fixing the crash, a more useful SQL error
code/message for the server-side case is now displayed: "Error during
backup/restore: No files are part of this backup".
================(Build #3856 - Engineering Case #562838)================
Applications using the Broadcast Repeater utility were not able to find servers
running on Linux machines on a different subnet using broadcasts. The server's
broadcast response was being malformed. This has now been corrected.
================(Build #3856 - Engineering Case #562535)================
The server may have crashed when trying to build a plan containing a hash
filter. This has been fixed.
================(Build #3856 - Engineering Case #562534)================
Executing a query of the form "select ... into #temp ...", or a
query with a stored procedure in the FROM clause, may have caused the server
to crash. This would have occurred if the statement contained a CONVERT or
CAST function call to a user-defined type, or a WITH( column-name <user-defined
type>, ...) clause. This has been fixed.
================(Build #3856 - Engineering Case #536347)================
Executing a LOAD TABLE statement with the clause CHECK CONSTRAINTS OFF, may
have failed if the table being loaded had publications. This has been fixed.
================(Build #3855 - Engineering Case #562656)================
Attempting to fetch data from a proxy table, when connected using a non-DBA
user, would have failed if the remote server was one of SAJDBC or ASEJDBC.
A permissions problem on the global temporary table used to hold the rows
fetched from the remote server has now been fixed in newly created databases.
For existing databases, log in with a DBA user and execute the following
statement:
grant select,insert,update,delete on dbo.omni_rset_fetch_table to PUBLIC
================(Build #3854 - Engineering Case #562414)================
In exceptionally rare circumstances, the server could have hung during startup.
This hang would only have occurred when the server was run on a multi-processor
machine. This has been fixed.
================(Build #3853 - Engineering Case #560935)================
Restoring a database from a backup archive using the RESTORE DATABASE command
with the RENAME option could have corrupted the transaction log associated
with the restored database. Translating the transaction log file using the
dbtran.exe utility would have resulted in an error indicating that the log
was missing a CONNECT operation. This has been fixed.
================(Build #3850 - Engineering Case #561553)================
When a query of the form "SELECT f(c) from t where t.c < 5",
where f() was a user-defined function and t was a proxy table, was executed
the server would have attempted to push the "where c < 5" to
the remote server in order to reduce the number of rows being returned. Unfortunately,
due to the fix for Engineering case 555959, this behaviour was changed such
that the WHERE clause was no longer getting pushed to the remote, resulting
in more rows being fetched from the remote than necessary. Note that the
fix for case 555959 caused a performance degradation only; the result set
was still correct. Nevertheless, this has now been resolved and
the WHERE clause will now properly be pushed to the remote server when it
is safe to do so.
================(Build #3850 - Engineering Case #559192)================
If an application fetched columns of type NUMERIC, DATE, TIME, or TIMESTAMP,
and the column was bound as a string in the client application, then performance
could have decreased. The slowdown was most apparent in statements where
a large number of rows were fetched with relatively little server processing
per row. This has now been fixed.
================(Build #3850 - Engineering Case #557465)================
In exceptionally rare circumstances, the server may have crashed during the
rewrite phase of optimization, when attempting to execute a very large and
complex SELECT statement. This may have occurred when the server was close
to running out of stack space, or was low on cache. The problem was seen
with queries containing UNION, EXCEPT, INTERSECT, and deeply nested subselects.
This has been fixed. The server will now correctly return the error "-890:
Statement size or complexity exceeds server limits".
================(Build #3847 - Engineering Case #560678)================
If a database's CHAR collation was a tailored UCA collation with a sorttype
specified, then comparisons for catalog strings (such as table names and
user names) would have incorrectly ignored the sorttype. For example, the
Swedish collation UCA(locale=swe; sorttype=phonebook) considers 'v' and 'w'
to be different characters at the primary level; however, those letters would
have been considered equal during catalog string comparisons, as if the catalog
collation were UCA(locale=swe) with no sorttype specified. This problem has
been fixed.
================(Build #3846 - Engineering Case #560107)================
Table names in case insensitive databases are required to be unique under
case insensitive comparisons, e.g., names FOO and foo refer to the same table.
In some cases, the server may have allowed multiple tables with the same
name that differed only in case to be created. This has been fixed so that
the server will now generate the expected error.
Note, once a database contains multiple tables with the same names, all
variations of the name will refer to the same (somewhat non-deterministic)
instance of the table. The situation can be corrected by dropping and recreating
the tables. Any existing data needs to be saved and restored as necessary.
================(Build #3846 - Engineering Case #560080)================
If a user-defined function was created with the DETERMINISTIC keyword, the
parsed version of the CREATE FUNCTION statement that was placed in the catalog
did not contain this keyword. The function may not then have been treated
as deterministic, and if it was unloaded, the CREATE FUNCTION statement would
also not have contained the keyword. This has been fixed.
Note, to fix existing user-defined functions created with DETERMINISTIC,
the function will need to be recreated.
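Recreating such a function might look like the following; the function itself is hypothetical:

```sql
DROP FUNCTION f_area;

-- Recreate the function so the parsed copy stored in the catalog
-- retains the DETERMINISTIC keyword.
CREATE FUNCTION f_area( w INT, h INT )
RETURNS INT
DETERMINISTIC
BEGIN
    RETURN w * h;
END;
```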
================(Build #3846 - Engineering Case #560056)================
If a connection was forcibly closed by the server (via DROP CONNECTION, liveness
or idle timeout), or was closed by the client libraries because of a liveness
timeout, the client application could have crashed if the next operation
it attempted was to open a cursor without preparing it. This has been fixed.
================(Build #3846 - Engineering Case #559898)================
If an application made use of one of the Remote Data Access JDBC classes,
and the application then disconnected abnormally, then the connection to
the remote server was never closed. The problem did not exist if the application
used one of the Remote Data Access ODBC classes instead. This problem has
now been fixed.
================(Build #3846 - Engineering Case #559822)================
If the optimizer selected a parallel execution plan for a query with a GROUP
BY operator on the right hand side of a nested loops join, and the group
by contained an AVG or other composite aggregate function, then it was possible
for the statement to incorrectly generate the error: "Field unexpected
during compilation". In this case, the server would have continued to
process other requests. This problem has been fixed.
A workaround is to disable parallel plans by setting Max_query_tasks=1 as
a connection option, or in the query OPTIONS clause.
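Both forms of the workaround are sketched below; the query itself is illustrative:

```sql
-- Connection-wide:
SET TEMPORARY OPTION Max_query_tasks = 1;

-- Or per statement, in the query OPTIONS clause:
SELECT dept_id, AVG( salary )
FROM employee
GROUP BY dept_id
OPTIONS( MAX_QUERY_TASKS = 1 );
```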
================(Build #3846 - Engineering Case #559810)================
If an INSERT statement inserted a string into a CHAR or VARCHAR column with
character-length semantics, then it was possible for the server to fail with
the following assertion failure:
100914: string too long when recording operation in transaction log.
In order for the failure to occur, the inserted string must have had byte-length
semantics and it must have contained more characters than the column definition.
Further specific characteristics of the database and statement were needed
to reproduce the problem. This problem has been fixed.
================(Build #3846 - Engineering Case #556175)================
Queries with unflattenable subqueries may have returned incorrect results,
or crashed the server. The following conditions must all have been true for
an incorrect result set to have been returned, or for the server to have
crashed:
- the query contained a subquery predicate (e.g., EXISTS() , NOT EXISTS())
which could not be flattened in the main query block, which was used in the
cost-based optimization by the SA Optimizer
- the subquery predicate was correlated to at least two tables from the
main query block
- a correlation expression was equated to a constant in the main query
block
For example:
select *
from f, i, fi
where f.fund_id = 1 <==== f.fund_id is a correlation expression for
the NOT EXISTS() predicate and it is equated to a constant
and f.fund_id = fi.fund_id
and fi.investor_id=i.investor_id
and i.investor_id not in ( select ib.investor_id <=== i.investor_id
is a correlation expression for the NOT EXISTS() predicate
from ba , ib
where f.fund_id = ba.fund_id <==== f.fund_id is a correlation
expression for the NOT EXISTS() predicate
and ib.bank_account_id=ba.account_id
)
================(Build #3845 - Engineering Case #542356)================
A query with a GROUP BY clause that referenced the same alias at least twice,
would have incorrectly returned a syntax error.
For example:
SELECT item = 'abc'
FROM product p LEFT OUTER JOIN sales_order_items
GROUP BY
item,
item
This has now been corrected.
================(Build #3844 - Engineering Case #559413)================
If many connections made concurrent external environment requests, and some
of these requests failed due to thread deadlock, then attempting to close
one of these connections may have caused the client to hang. This problem
has now been fixed.
================(Build #3843 - Engineering Case #558446)================
Executing an insert statement of the form: 'INSERT INTO <remote table>
ON EXISTING UPDATE SELECT * FROM <table>', would have caused the server
to correctly return an error; but, if the application attempted to execute
an insert statement of the form: 'INSERT INTO <table> ON EXISTING UPDATE
SELECT * FROM <remote table>', the server would have crashed. The problem
has now been fixed and the server now correctly returns an error in both
cases.
================(Build #3843 - Engineering Case #556123)================
Under certain circumstances, use of the system procedure sa_row_generator()
could have caused the server to crash, or enter an infinite loop. This problem
has been fixed.
Note that the result set of the stored procedure will be empty when the
specified values are not appropriate.
================(Build #3841 - Engineering Case #558756)================
Attempting to execute a query that referenced a view containing a remote
query, could have crashed the server. The crash would have occurred if the
remote query had both GROUP BY and HAVING clauses, and/or aliases. This problem
has now been fixed.
================(Build #3841 - Engineering Case #557332)================
If a string column used a UCA collation with accent or case sensitivity,
and the column appeared in an index with order DESC, then the server could
have returned incorrect answers for LIKE predicates on the column. Problematic
LIKE predicates started with a prefix of non-wildcard characters, such as
the following: "T.x LIKE '01234%'". This has been fixed.
================(Build #3840 - Engineering Case #557767)================
If an ALTER TABLE statement needed to rewrite the rows of the table (for
example, if a 17th nullable column is added) and the table contained long
or compressed strings, the operation could have taken much longer than necessary
and the database may have ended up with many more free pages than before the
ALTER. The number of extra pages would have been approximately the number
of pages in the table in question. This has been fixed.
================(Build #3840 - Engineering Case #555940)================
Simple statements of the form "select TOP n * from T order by T.X [asc|desc]"
may have had a very inefficient query access plan. This has been fixed.
================(Build #3839 - Engineering Case #557953)================
The Index Consultant may have caused the server to crash following query
optimization. The class of queries for which this could happen was characterized
by the existence of a comparison or EXISTS() subquery predicate in one of
the query blocks.
For example:
select * from sales_order_items soi1, sales_order_items soi2
where soi1.id = soi2.id
AND
soi1.quantity <
(
select avg( quantity ) from sales_order_items soi3
where soi3.prod_id = soi1.prod_id
);
This has now been fixed.
================(Build #3838 - Engineering Case #557802)================
In rare circumstances, server CPU usage could have been unnecessarily high.
The problem would only have occurred with certain layouts of memory within
a heap, and when certain heap operations were being performed on the heap.
This problem has been fixed.
================(Build #3837 - Engineering Case #557328)================
If an application logged on the database using a userid with multibyte characters,
and the application subsequently made a Java in the database call, then there
was a chance the Java VM would have failed to start if this was the first
time a Java call was made since the database was started. This problem has
now been fixed.
The workaround is to log in using a userid that contains no multibyte characters
and execute a START JAVA statement. Once the Java VM is successfully started,
making Java calls using userids with multibyte characters will work fine.
================(Build #3835 - Engineering Case #557679)================
When running a 64-bit version of perfmon, the Adaptive Server Anywhere or
SQL Anywhere counters would not have been displayed if the 64-bit version
of the counters library (dbctrsX.dll) was registered and the 32-bit version
was not. This has been corrected.
Note, both versions of the counters library are registered during a normal
product installation so it is unlikely that users would encounter this problem.
================(Build #3835 - Engineering Case #556552)================
In certain cases, rebuilding a pre-10.0 database to version 10.0 or later
may have failed due to an incorrectly charset-converted database file-name.
The SQL error that was reported was:
"Cannot access file '<incorrectly-converted-file-name>.db'
-- The filename, directory name, or volume label syntax is incorrect."
This issue only occurred when using an OS charset whose label was not used
prior to version 10.0.0, such as "GBK". Additionally, for this
issue to have occurred, the database must have used a different charset than
the OS charset - specifically, the filename must contain characters that
require translation to be valid in the database charset. This has been fixed.
================(Build #3834 - Engineering Case #555959)================
If an application executed a remote query that involved a User Defined Function,
and the call to the function passed in column references as parameters, there
was a chance the server would have crashed when the server decided to run
the query in partial passthru mode. This problem has now been fixed.
================(Build #3833 - Engineering Case #555936)================
An attempt to drop a database whose page size was larger than the server's
page size, using the 'DROP DATABASE' statement, would have resulted in the
error "An attempt to delete database 'database file' failed", leaving
the reason for the failure unclear. The server will now raise the more specific
error, "Page size too big: 'database file'".
================(Build #3833 - Engineering Case #555808)================
For a query containing a WITH RECURSIVE table, if the estimated cardinality
of the recursive table after the first iteration was lower than 2 rows, the
optimizer would not have set the alternative JNL operator if an alternative
index existed. The resulting execution plans may have been very inefficient,
depending on the number of rows on the right-hand side of the JoinHashRecursive
operator. This has been fixed.
Example:
with recursive Ancestor( child ) as
( ( select row_num from rowgenerator R1 where R1.row_num = 1 )
UNION ALL
( select R2.row_num+100
from Ancestor, rowgenerator R2
where Ancestor.child = R2.row_num
) )
select * from Ancestor
Ancestor(child) has exactly one row after the first query block is computed.
Computing the second query block is very efficient if the executed plan is
"Ancestor<seq> JNL R2<row_num>". However, because the
optimizer did not set up the JNL operator, the inefficient plan "Ancestor<seq>
JH* R2<seq>" was executed.
================(Build #3833 - Engineering Case #555769)================
If an application, or set of applications, constantly connected, made a Java
call, and then subsequently disconnected, then there was a chance the Java
VM used by the server would have thrown an 'out of memory' exception. This
problem has been fixed.
================(Build #3833 - Engineering Case #555765)================
There was a very small chance that the server would have returned a thread
deadlock error on an external environment call in situations where no actual
thread deadlock occurred. For this problem to have occurred, a previous valid
thread deadlock must have been returned by the server, resulting from a large
number of connections attempting to issue an external environment call at
the same time. This problem has now been fixed.
================(Build #3832 - Engineering Case #554889)================
Simple SELECT, UPDATE, and DELETE statements with a specific predicate structure
could have caused the server to crash. A workaround to avoid the crash is
to set the option Max_plans_cached=0. This problem has been fixed.
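The workaround can be applied with a statement like the following (a sketch; setting a PUBLIC option requires DBA authority, and the option should be restored to its previous value once the fix is installed):

```sql
-- Disable plan caching server-wide to avoid the crash until the fix is applied.
SET OPTION PUBLIC.Max_plans_cached = 0;
```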
================(Build #3831 - Engineering Case #555623)================
The following fixes have been made to graphical plan descriptions:
1. Range expressions of a partial index scan are now always printed as "low
expression <[=] column name <[=] high expression [ASC|DESC]".
The low and high expressions can be NULL if the real fence post was set to
the NULL value; they can be '*' if the real fence post is not set (this is
actually an open-range partial index scan).
2. Selectivity estimations are now printed with 9 decimal digits.
================(Build #3831 - Engineering Case #555622)================
Lower bounds are imposed on selectivity and cardinality estimates during
the optimization process. These bounds are meant to be greater than 0, but
less than one row of the result set of a physical operator, and are set to
be a very small percentage, which in the case of very big tables, or big
intermediate result sets (> 10,000,000 rows), were actually bigger than
one row. This has been fixed.
================(Build #3831 - Engineering Case #555390)================
If the option PUBLIC.quoted_identifier was set to 'off', executing the reload
script produced by Unload utility (dbunload) would have failed when attempting
to set subsequent options. This has been fixed.
================(Build #3831 - Engineering Case #555228)================
Under some circumstances the server could have crashed when a table was dropped.
This has been fixed.
================(Build #3831 - Engineering Case #549085)================
A heavily-loaded server could have hung while running diagnostic tracing
with a high detail level. This has been fixed.
================(Build #3830 - Engineering Case #554886)================
Some particular constructions of the LIST aggregate could have caused the
server to crash while processing the statement containing the LIST aggregate.
In order for the crash to occur, the LIST must have contained an ORDER BY
clause. This has been fixed.
================(Build #3822 - Engineering Case #553469)================
Backups done where the checkpoint log was copied (WITH CHECKPOINT LOG COPY),
did not mark the dbspaces as active. This allowed databases that possibly
required recovery to be started in read-only mode. If this was done, it
could have led to assertion failures as the server tried to flush pages
that had been dirtied. This has been fixed.
================(Build #3821 - Engineering Case #548870)================
A database could have become corrupted when deleting rows containing long
string (or binary) values that were indexed. The server then may have crashed,
or failed an assertion, when attempting to read rows from the table at a
later time. The server would likely have crashed during full validation of
a table corrupted in this manner. This has been fixed. Dropping and re-creating
the index should be a valid workaround.
================(Build #3820 - Engineering Case #553752)================
If a CREATE SYNCHRONIZATION SUBSCRIPTION statement was executed with an OPTION
clause that gets the option value from a variable (e.g. OPTION sv=@Var) and
the variable was set to null, then the server may have crashed, or inserted
a random option value into the catalogs. This has been fixed so that the
server now returns a SQL error (SQLSTATE_BAD_SYNC_OPTION_VALUE) "Synchronization
option '%1' contains semi-colon, equal sign, curly brace, or is null".
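A minimal sketch of the pattern that triggered the problem (the publication, user, and option names are illustrative):

```sql
CREATE VARIABLE @Var VARCHAR(128);
SET @Var = NULL;
-- Before the fix this could crash the server or insert a random option value;
-- it now fails with the SQL error described above.
CREATE SYNCHRONIZATION SUBSCRIPTION TO my_pub FOR my_user OPTION sv=@Var;
```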
================(Build #3820 - Engineering Case #543478)================
In very rare circumstances, an auto-shutdown of a database could have caused
the server to crash, if the server was in the process of shrinking the cache
at the same time. This problem has been fixed.
================(Build #3819 - Engineering Case #553720)================
If an HTTP/1.1 client made a request on a keep-alive connection, and then
after receiving the response, did not send another full request over the
connection for one minute, the connection would have timed out and sent a
408 Request Timeout status message. This is consistent with the HTTP/1.1
specification, but if the HTTP client was Apache, it would have been confused
by the error message. The behaviour has been changed so that the error status
code is suppressed (and the connection is simply closed) if all of the following
are true:
- A complete request and response have previously been sent on the connection
- No data has been received on the connection since the most recent response
was sent
- The keep-alive timeout has expired
================(Build #3819 - Engineering Case #553693)================
If a server had many external environment requests active, and several of
these external environment requests were subsequently cancelled at the same
time, there was a chance the server would have crashed. There was a very
small window where a cancel request could have been ignored by the external
connection. This has now been fixed.
================(Build #3819 - Engineering Case #553687)================
Tables storing CHAR and NCHAR values longer than approximately 126 bytes,
may have used two bytes of storage more per value than was necessary. If
the column's PREFIX value was smaller than 126, then strings longer than
the specified prefix value would have used two extra unnecessary bytes as
well. This has been fixed.
Note, rebuilding the database after applying this fix will remove all of
the unneeded bytes.
================(Build #3818 - Engineering Case #552595)================
In exceptionally rare conditions, typically while the server was under heavy
load or processing many backups on many databases simultaneously, a backup
could hang and the CPU usage of the server could have gone to 100%. This
has been fixed.
A workaround is to use a higher value on the -gn command line switch.
================(Build #3818 - Engineering Case #552311)================
If a SELECT statement contained a procedure call with arguments that contained
column references to other tables in the FROM clause, the server may have
returned a "Column not found" error for these column references.
This would have occurred when the query rewrite process was able to remove
tables from the query.
For example:
create table T1( a int primary key )
create table T2( a int primary key, c int )
create procedure P1 ( u int ) begin select a from T1 where a = u;
end
select dt.* from T1 left outer join T2 on T1.a = T2.a, lateral( P1(T2.c)
) dt
For every row in T1 the left outer join can return at most one row because
of the join on primary keys. So the query rewrite removed the left outer
join and the table T2 from the query which caused a "Column T2.c not
found" error for the procedure argument. This has been fixed.
================(Build #3817 - Engineering Case #553064)================
If an application accidentally defined a Remote Data Access server that used
the SAJDBC class and connected back to the same database as the local user
connection, then an error indicating that the server definition was circular
was not displayed. This problem is now fixed and an error message is now
displayed. Note that this problem does not affect SAODBC class remote servers.
================(Build #3817 - Engineering Case #552760)================
If the default port number was already in use (e.g. by another server), the
server may have failed to start TCP/IP, rather than choosing a different
port number. If the -z command line option was used, the error "TCP/IP
link, function bind, error code 10013" would have been displayed on
the server console. This would only have happened on Windows Server 2003,
and has now been fixed.
================(Build #3816 - Engineering Case #552653)================
Some situations, similar to those fixed for Engineering case 545904, allowed
the server to hang while concurrently updating rows containing blobs. This
has been fixed.
================(Build #3816 - Engineering Case #552620)================
After the server had started a database that required recovery, it could
have crashed when running a cleaner. This has now been fixed.
================(Build #3815 - Engineering Case #552648)================
Renaming a transaction log via a BACKUP statement could have failed as a
result of a transient sharing violation error on Windows. The error may have
been caused by a virus scanner or other software accessing the file as it
is being renamed. This has been fixed.
================(Build #3815 - Engineering Case #552587)================
If an application created a remote server and then accessed that remote server
using a different case than the one specified on the CREATE SERVER statement,
the server would have incorrectly opened additional connections to the remote
server. For example, suppose the application executed the following CREATE
SERVER statement:
CREATE SERVER MyServer CLASS ...
and then created two proxy tables as follows:
CREATE EXISTING TABLE table1 AT 'myserver...'
CREATE EXISTING TABLE table2 AT 'MYserver...'
If the application now queried "table1", the remote data access
layer would have correctly established a connection to the remote and returned
the requested data. If the application then queried "table2", the
data access layer should reuse the connection previously established to the
remote, but the server instead would have created a new connection to the
remote. This problem has now been fixed.
================(Build #3815 - Engineering Case #552488)================
The server could have failed assertion 201138 - "Attempt to allocate
page ... beyond end of dbspace". For the assertion to have occurred,
the database must have had an additional (non-primary) dbspace, and an operation
such as TRUNCATE, DROP TABLE, DROP INDEX, etc must have been performed on
an object in that dbspace. This has now been fixed.
================(Build #3814 - Engineering Case #552312)================
The changes made for Engineering case 541615 introduced a problem where attempting
to create a proxy table to a DB2 or Microsoft SQL Server table using Sybase
Central could have failed with a strange conversion error, instead of creating
the proxy table. This problem has now been corrected.
================(Build #3814 - Engineering Case #552302)================
If the HTTP listener was enabled, the server may have crashed on shutdown.
There is no risk of data corruption. This has been fixed.
================(Build #3813 - Engineering Case #552066)================
If a SQL Anywhere database used with Replication Server was unloaded using
the Unload utility (dbunload), some tables and procedures used by Replication
Server would not have been included. This has been fixed.
================(Build #3812 - Engineering Case #552189)================
The changes for Engineering case 545904 introduced a problem where the server
could have issued a variety of assertions, including "200610: Attempting
to normalize a non-continued row" while concurrently updating rows
containing blobs. For this to have occurred, the string values must have
been less than a page size, but larger than the column's inline amount. This
has been fixed.
================(Build #3812 - Engineering Case #552186)================
Truncating a table with deleted rows could have caused the server to fail
assertion 201501 - "Page for requested record not a table page or record
not present on page". For this to have occurred, the table must have
contained string data shorter than a page, and one of those short values
(which had to have come from one of the deleted rows) must have been held
active in an open cursor or variable. This has now been fixed.
================(Build #3812 - Engineering Case #552077)================
The server could have become deadlocked when frequently executing UPDATE
statements on the same set of rows. This would only have occurred if the
table being updated had a non-unique index defined. This has now been fixed.
================(Build #3812 - Engineering Case #552057)================
In general, applications that use Java in the database support install the
classes and jars to be used into the database. Doing so allows the database
to be moved from one machine to another, or from one platform to another.
The other benefit of installing classes and jars into the database is that
the SQL Anywhere class loader can then be used to fetch the classes and resources
from the database allowing each connection that is using Java in the database
support to have its own instance of these classes and its own copy of static
variables within these classes. In VERY rare cases, it is beneficial to have
the system class loader load a class instead of the SQL Anywhere class loader.
The only real reason for having the system class loader load the class is
that statics within classes loaded by the system class loader can then be
shared across all connections using Java in the database support. There are
of course many reasons why the system class loader should not be used:
1) since statics are shared across all connections, there is an issue with
the security of data
2) mixing classes loaded by the SA class loader and the system class loader
can lead to the VM throwing IllegalAccess and ClassCast exceptions
3) there is now the danger that the system class loader will get used for
loading classes that should actually have been loaded by the SA class loader
Because of these potentially serious problems, it has always been STRONGLY
recommended that all classes and jars to be used for Java in the database
support be explicitly installed within the database. However, for those rare
cases where the class really needs to be loaded by the system class loader,
the server's classpath argument (-cp) can be used to add directories and
jars to the classpath that the server builds for launching the JVM. Unfortunately,
as of version 10.0, the server's classpath argument was being ignored when
launching the JVM. This problem has now been fixed and the server's classpath
argument is now properly appended to the classpath the server builds when
launching the VM. Again, the use of the server's classpath argument is STRONGLY
discouraged; instead, it is STRONGLY recommended that all classes and jars
to be used for Java in the database support be installed explicitly in the
database.
================(Build #3812 - Engineering Case #550103)================
Attempting to query a proxy table with a SELECT statement that used the FOR
XML RAW clause would have failed with an "invalid expression" error.
This problem has now been fixed.
================(Build #3811 - Engineering Case #551944)================
The fix for Engineering case 549983 introduced a problem such that two simultaneous
remote data access requests from two separate connections could have caused
the server to crash. This problem has now been fixed.
================(Build #3809 - Engineering Case #551750)================
Executing a Java stored procedure that attempted to fetch resources using
getResourceAsStream() or getResource(), would have had the fetch of the resource
fail. This problem has now been fixed.
Note that due to differences in compression schemes, it is strongly recommended
that jars containing textual resources be created without compression turned
on. For jars containing binary resources (e.g. movies or images), using a
compressed jar will work fine.
================(Build #3809 - Engineering Case #551749)================
Connections that had executed a Java stored procedure would have taken three
seconds longer than necessary when attempting to disconnect. Connections
that were not used to make Java stored procedure calls are unaffected by
this problem. The problem has now been fixed.
Note that this problem was introduced in the changes made for Engineering
case 548322.
================(Build #3809 - Engineering Case #551747)================
Deadlocks could have occurred more often than expected at isolation level
3, if more than one transaction attempted to update the same row concurrently.
This has been corrected.
================(Build #3809 - Engineering Case #551739)================
When a TCP/IP connection abnormally disconnected from the server (for example
due to the DROP CONNECTION statement), there was a chance that a network
buffer was leaked. This has been fixed.
================(Build #3808 - Engineering Case #551690)================
If a request was made that was larger than the packet size, then, in rare
timing dependent cases, the connection could have hung until it was cancelled
or dropped. Examples of requests larger than the packet size include executing
a statement with long SQL text, or executing a statement with large host
parameters. This has now been fixed.
================(Build #3808 - Engineering Case #551611)================
If an application had uncommitted data in a local temporary table, and then
called a Java, or any other, external environment procedure, there was a
chance the server would have crashed if the application subsequently performed
a rollback later on after the Java or external environment procedure had
completed. This problem has now been fixed.
Note that this problem will not show up if the local temporary table is
declared with "on commit preserve rows" or "not transactional".
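A sketch of the workaround mentioned above, declaring the local temporary table so that its rows are not affected by the rollback (table and column names are illustrative):

```sql
-- Rows survive COMMIT/ROLLBACK, so the crash scenario does not apply.
DECLARE LOCAL TEMPORARY TABLE my_temp (
    id  INT,
    val VARCHAR(100)
) ON COMMIT PRESERVE ROWS;
-- Alternatively, use NOT TRANSACTIONAL in place of ON COMMIT PRESERVE ROWS.
```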
================(Build #3808 - Engineering Case #547496)================
A long-running HTTP connection to an OEM server would have resulted in an
authentication violation. This was corrected by making all HTTP connections
authenticated by default.
================(Build #3807 - Engineering Case #548625)================
When the -z command line option was used, or when request level logging was
enabled, the server could have generated thousands of message of the form:
"Internal warning: <x> dispatch took <y> seconds"
where x is an object name and y is a number. This would have affected the
Windows platform only, and not necessarily on all Windows machines. This
has now been fixed.
A workaround is to turn off the -z option or request level logging.
================(Build #3807 - Engineering Case #547506)================
The server could have become unresponsive when executing a query, if during
an index scan very few rows satisfied the WHERE conditions. This has been
fixed.
================(Build #3806 - Engineering Case #551442)================
The changes for Engineering case 547228 did not fully eliminate the assertions
and other failures that could have occurred when committing deleted rows
containing short strings. This has been corrected.
================(Build #3805 - Engineering Case #551259)================
If a mirror server encountered a failed assertion, the primary server could
have hung when the mirror server was stopped. This has been fixed.
================(Build #3805 - Engineering Case #551112)================
If a database mirroring server encounters a failed assertion, the desired
behaviour is for the server to exit. This allows the mirroring partner server
to detect that a failure has occurred and take an appropriate action. After
a failed assertion on Windows, the server was exiting in such a way as to
cause Windows to display a dialog noting the abnormal exit. This prevented
the server from actually exiting until action was taken to clear the dialog,
and thus prevented the partner server from being notified. This has been
fixed.
In addition to this change, customers should consider configuring dbsupport
to ensure that it does not prompt when a failure occurs. For example:
dbsupport -cc autosubmit
or
dbsupport -cc no
A future version of the software may avoid the need to configure dbsupport
to prevent prompting in this situation.
================(Build #3805 - Engineering Case #550839)================
Dynamic cache size tuning was not enabled while databases were recovering
at server startup. Prior to 9.0.0, dynamic cache size tuning was enabled
during recovery. This has now been fixed.
================(Build #3804 - Engineering Case #550362)================
In a database mirroring system, if the transaction log on the primary server
was renamed via a backup, the mirror server could have reported various failed
assertions, for example:
100902, 100903: Unable to find table definition for table referenced in
transaction log
100904: Failed to redo a database operation
For this problem to have occurred, other connections must have been making
changes to the database at the time the log is renamed. The problem seemed
to occur more frequently if many old log files were present on the primary
server. The copy of the transaction log on the mirror server would have been
corrupted as a result of this problem. When the log was renamed, there was
a small window during which another connection could force a log page to
be sent to the mirror before the mirror was notified that the log was renamed.
The transaction log lock is now held across this window.
================(Build #3803 - Engineering Case #550841)================
Entity headers and a response body should not have been returned by a web
service having explicitly set a 'Not Modified' HTTP status via a call to
sa_set_http_header ( '@HttpStatus', '304' ). The following response headers
were being returned:
Transfer-Encoding: chunked
Expires: ... GMT
Content-Type:...
As an artifact of the chunked transfer encoding, a response body consisting
of a 0 chunk length was returned. This has been corrected so that a response
body and the above headers will not be sent by the server when a 304 'Not
Modified' HTTP status has been explicitly set.
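A sketch of a web service procedure that sets the status explicitly (the procedure name is hypothetical; sa_set_http_header is the system procedure named above):

```sql
CREATE PROCEDURE check_modified()
BEGIN
    -- With this fix, explicitly setting a 304 status suppresses the
    -- entity headers and the chunked response body.
    CALL sa_set_http_header( '@HttpStatus', '304' );
END;
```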
================(Build #3803 - Engineering Case #550838)================
The server was not utilizing 100% of all CPUs with a workload that should
have been mostly CPU bound with interleaving I/O. This would likely have
occurred on lightly loaded servers. This has been fixed.
================(Build #3803 - Engineering Case #550803)================
When using the system procedure xp_sendmail(), if the message body contained
a period "." on a line by itself, text following that line would
have been removed from the message that was sent. A line containing a single
period is the end-of-message marker for the SMTP protocol. When sending a
line that begins with a single period, clients must precede it with another
period character, which xp_sendmail() was not doing. This has been fixed.
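For example, a message body like the following would previously have been truncated at the lone period (the recipient address is hypothetical; char(10) is used for line breaks):

```sql
-- The '.' on its own line is the SMTP end-of-message marker;
-- xp_sendmail() now dot-stuffs it so the full body is delivered.
CALL xp_sendmail(
    recipient = 'user@example.com',
    subject   = 'Test',
    "message" = 'first line' || char(10) || '.' || char(10) ||
                'this text was previously lost'
);
```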
================(Build #3803 - Engineering Case #549396)================
Some arithmetic expressions in procedure statements could have been evaluated
with the incorrect result domain. This would have occurred when one of the
arguments had one of the following domains { BIT, TINYINT, UNSIGNED SMALLINT,
SMALLINT } and the other argument was either one of those domains or INT.
The result domain would have been INT. In some cases this could have led
to a different result being returned than expected, for example if an overflow
would have occurred in the correct result domain. This problem has been fixed.
================(Build #3802 - Engineering Case #550729)================
The user-supplied comment provided by the WITH COMMENT clause that may
accompany the BACKUP statement was written to the backup.syb log in the
database character set, rather than the OS character set. All other information,
such as the database name, is written in the OS character set. This has now
been fixed.
================(Build #3802 - Engineering Case #550716)================
If an application updated a proxy table and then queried the TransactionStartTime
property, the value returned by the property would not have been properly
updated. This problem has now been fixed.
================(Build #3802 - Engineering Case #550694)================
A string ending with an incomplete multi-byte character may have had extra
characters appended to its escaped representation as generated by an UNLOAD
TABLE statement. This has been fixed.
================(Build #3802 - Engineering Case #550676)================
Attempting to construct a string value longer than the server's maximum string
length (2^31-1) would have resulted in silent truncation of string data.
For example, the statement:
select length(repeat('a', 2000000000) || repeat('b', 2000000000))
returns 2147483647 (i.e., 2^31-1) characters, but does not raise a SQL error
to indicate the operation failed and truncated the string data. This has
been fixed so that SQL Error -1313 MAX_STRING_LENGTH_EXCEEDED will now be
generated whenever an operation attempts to construct a string value longer
than the server's internal maximum string length.
================(Build #3802 - Engineering Case #550536)================
If an application was using Java in the database and the Java VM ran out
of memory, then the Java VM would have remained alive even though Java in
the database requests could no longer be made. The Java VM will now exit
in this situation, and new Java in the database requests will now automatically
restart a new Java VM.
================(Build #3800 - Engineering Case #553103)================
During the optimization process, the optimizer creates and maintains order
properties for all physical operators placed in the access plans generated
as part of the search space generation process. These order properties are
used to decide if SORT physical operators are needed, for example to satisfy
an ORDER BY clause at the root of the access plan. Maintaining and tracking
order properties are expensive operations as described in [1]. For performance
reasons, the optimizer will now build and maintain only those order properties
needed by an interesting order property, such as the one imposed by an ORDER
BY clause. This change does not affect the performance of the order optimization
in any way; it just makes the optimization process more efficient.
[1] "Database System with Methodology for Generalized Order Optimization",
Matthew Young-Lai, Anisoara Nica, 2007, US Patent 7,359,922.
================(Build #3800 - Engineering Case #550416)================
If an application executed a Java in the database stored procedure, and then
subsequently canceled the request, in very rare cases the server may have
crashed if the cancel request coincided with the Java request actually finishing.
This problem has now been fixed.
================(Build #3800 - Engineering Case #547213)================
The server could have looped forever executing a REORGANIZE TABLE statement.
This should only have occurred if the table had a clustered index that contained
non-unique values in the clustered column or columns. This has been fixed.
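A sketch of the configuration that could trigger the loop (table and index names are illustrative):

```sql
CREATE TABLE orders ( id INT PRIMARY KEY, region INT );
-- Non-unique values in the clustered column could cause
-- REORGANIZE TABLE to loop forever before this fix.
CREATE CLUSTERED INDEX orders_region ON orders ( region );
REORGANIZE TABLE orders;
```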
================(Build #3799 - Engineering Case #550252)================
If an application made a remote procedure call that contained output parameters
of type Float, and the remote server for that RPC was using one of the SAJDBC
or ASEJDBC classes, then the value of the Float output parameter would always
have been 0. This problem has now been fixed.
================(Build #3799 - Engineering Case #550246)================
If an application attempted to create a proxy table to a SQL Anywhere or
Microsoft SQL Server remote server, and the remote table had an XML column,
then the server would have returned an undefined data type error. This problem
has now been fixed.
================(Build #3799 - Engineering Case #549940)================
If a transaction log was not completely written out to disk (e.g. during
a low disk space scenario), it was possible for the server to crash when
trying to apply a partial log operation during recovery. This has been fixed.
================(Build #3797 - Engineering Case #550116)================
In rare cases, a stored procedure call could have caused a server crash.
This has been fixed.
================(Build #3797 - Engineering Case #549999)================
If an Open Client application supports retrieving result sets with a large
number of columns, then attempting to perform a Kerberos login using such
an application would have failed with a protocol error. This problem has
now been fixed.
================(Build #3797 - Engineering Case #549983)================
If an application executed a query involving both proxy tables and local
tables, and the query had IN predicates that contained subqueries involving
proxy tables, then there was a chance executing the query would have caused
a server crash. This problem has now been fixed.
================(Build #3797 - Engineering Case #549967)================
A SELECT ... FOR XML ... over an NCHAR value could have caused a string right
truncation error to be generated.
For example:
SELECT * FROM t1 FOR XML AUTO
where t1 is defined with an NCHAR column, e.g.:
create table t1 (col1 LONG NVARCHAR);
For the error to have occurred, the byte length of the NCHAR value must
have been greater than 32767, and the "string_rtruncation" database
option must have been set to "on" (which is the default).
This has been fixed.
================(Build #3797 - Engineering Case #549866)================
The server could have crashed, or failed assertions, when under heavy load
if pages were freed. The assertions were most likely to be about unexpected
page types or contents. This has now been fixed.
================(Build #3796 - Engineering Case #549846)================
In rare cases, monitoring of a heavily loaded server using the system procedure
sa_performance_diagnostics, could have caused a server crash. This has been
fixed.
================(Build #3796 - Engineering Case #547498)================
Outer references are expressions used in a nested query block that reference
table columns from outside that query block. For example, in the query
below, 'T.Z+1' is an expression used in a subquery that references column
T.Z of the base table T in the FROM clause of the main query block. Such
expressions are now sometimes considered constants inside the nested query
block. These constants are used in many optimizations by the SA optimizer,
such as order optimization, functional dependencies optimization, and MIN/MAX
optimization. Previously, these outer references were always treated as
non-constants.
Q:
select *
from T
where T.X <> (select max(R.Y) from R where R.Z = T.Z+1)
================(Build #3796 - Engineering Case #545353)================
The cardinality estimation of the table expression "P key join F",
where P is the primary key table and F is the foreign key table, was incorrectly
computed in certain cases for multi-column keys. This has been fixed. Now,
the cardinality estimation for this class of table expressions is "card(F)
minus the number of rows in F with at least one NULL value in the multi-column
key".
Example:
ALTER TABLE F ADD FOREIGN KEY ( fk1, fk2, ..., fkn ) REFERENCES P ( pk1, pk2, ...,
pkn )
Q:
select * from
F, P
where F.fk1 = P.pk1 and F.fk2 = P.pk2 and ... and F.fkn = P.pkn
returns all rows from the foreign key table F less the rows having at least
one NULL for the foreign key columns F.fk1, F.fk2, ..., F.fkn.
================(Build #3795 - Engineering Case #549644)================
When running on Linux systems, a mini core could have been improperly generated
under rare circumstances. This has been fixed.
================(Build #3795 - Engineering Case #549622)================
A server running on a Linux system may have hung when under heavy I/O load
with a large number of concurrent request tasks (i.e. a large -gn value). Specifically,
if -gn was larger than 250, then there was a chance a hang may have occurred.
This has been fixed. The workaround is to reduce the -gn value.
================(Build #3794 - Engineering Case #549453)================
When manually unregistering the SQL Anywhere ODBC driver from the command
line using "regsvr32 -u dbodbcXX.dll", the unregistration process
may have failed and reported error code 0x8000ffff. Note that the failure
occurred after the user successfully acknowledged the prompt to allow dbelevate
(the "SQL Anywhere elevated operations agent") to run as an administrator.
This problem has been fixed.
As a work-around, run "regsvr32 -u dbodbcXX.dll" from a command
shell which is already elevated.
================(Build #3794 - Engineering Case #549424)================
A server running a tracing database could have crashed. This has been fixed.
================(Build #3794 - Engineering Case #547392)================
Database corruption was possible if a database crashed while a lazy checkpoint
was in progress. For corruption to occur, pages must have been allocated
during the lazy checkpoint and one of the following must have occurred prior
to the checkpoint:
- dropping a table or index
- truncating a table (that could have been truncated quickly, eg. no triggers)
- deleting or replacing long blobs (greater than roughly page size)
- [in general] an operation that resulted in pages being freed without the
contents being modified
This was more likely to have been an issue on heavily loaded servers. This
problem has been fixed by temporarily allocating the pages at the start of
the lazy checkpoint and then re-freeing them at the end.
================(Build #3792 - Engineering Case #548833)================
If a server was very busy, and several connections attempted to start external
environments at the same time, and if several of the start external environment
requests timed out, then, in very rare cases, the server could eventually
have become unresponsive. This problem has now been fixed.
================(Build #3791 - Engineering Case #548710)================
If recovery was attempted on a database that had grown since the last successful
checkpoint had been executed, some pages may have become unavailable for
reuse. This has now been fixed.
================(Build #3791 - Engineering Case #548626)================
If a table had a trigger defined that made an external environment call and
many connections attempted to access the table at the same time, forcing
table locks and simultaneous external environment calls, then there was a
chance the server would have hung. This problem has now been fixed.
================(Build #3791 - Engineering Case #548323)================
If many connections were making external environment (or Java) calls at the
same time, and the number of worker threads had not been increased by an
appropriate amount, then there was a possibility that the server would either
have hung or crashed. A thread deadlock error will now be returned instead.
================(Build #3790 - Engineering Case #548627)================
Remote TCP connection attempts to servers running on Unix systems with IPv6
enabled, may have failed with the error "Connection error: An error
occurred during the TCPIP connection attempt." This was only likely
to happen on machines that were ONLY using IPv6. This has been fixed. As
a workaround, the IPv6 address of the server machine can be specified using
the HOST parameter.
================(Build #3790 - Engineering Case #548470)================
If a mirror server had shut down and then very quickly restarted as a preferred
server, synchronization may have been delayed by thirty seconds. This has
been fixed.
================(Build #3790 - Engineering Case #548437)================
Under rare circumstances, a loss of quorum could have resulted in the database
file and the transaction log on the primary becoming out of sync. Subsequent
attempts to start the database would have failed with the error "Unable
to start specified database: Cannot use log file '<name>' since it
is shorter than expected". This has been fixed.
Note, the likelihood of this problem appearing with 11.0.0 servers was even
smaller than with 10.0.1 servers.
================(Build #3789 - Engineering Case #546882)================
The GrowLog system event that fires when the database transaction log grows
can be setup to truncate the transaction log upon its exceeding a certain
size. In cases where the database server was very busy, the transaction log
would not have been truncated often enough. This may have led to the transaction
log getting significantly larger than the threshold set in the event. This
has been fixed, although in a very busy server it is still possible for the
log to grow larger than the threshold for short periods of time.
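As a rough sketch of such an event (the directory path and the 10 MB threshold are hypothetical examples; check the exact handler body against the CREATE EVENT documentation for your version):

```sql
-- Hypothetical sketch: fire when the transaction log exceeds 10 MB, then
-- back up and rename the log so a fresh one is started. Path and
-- threshold are examples only.
CREATE EVENT trim_log TYPE GrowLog
WHERE event_condition( 'LogSize' ) > 10
HANDLER
BEGIN
    BACKUP DATABASE DIRECTORY 'c:\\logbackup'
        TRANSACTION LOG ONLY
        TRANSACTION LOG RENAME;
END;
```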
================(Build #3789 - Engineering Case #542016)================
When rebuilding a pre-10.0 database using the Unload utility (dbunload) with
the -an or -ar command line options, dbunload could have hung under very
rare conditions on certain Windows systems. This problem has only ever been
observed on a few machines configured as Domain Name Servers (DNS), but the
hang could have occurred under other conditions as well. This has been fixed.
================(Build #3788 - Engineering Case #545570)================
The server could have crashed while inserting rows to a table when also creating
statistics on the table. This has been fixed.
================(Build #3787 - Engineering Case #547708)================
Attempting to create a database with an apostrophe in a filename or the dba
user's password, could have failed with a syntax error. Also, attempting
to create a database with a dbkey containing a backslash may have resulted
in a database which could not be connected to. These problems have now been
fixed.
================(Build #3787 - Engineering Case #532314)================
If creating a tracing database resulted in an error, the real cause of the
error would have been missing from the error dialog details. This has been
fixed.
================(Build #3786 - Engineering Case #547513)================
When running the server on Unix systems and using the -m command line option
("truncate transaction log after checkpoint"), the transaction
log was not being truncated on checkpoints. This has been fixed.
================(Build #3786 - Engineering Case #547228)================
In very rare circumstances, the server could have failed a fatal assertion
when committing deleted rows containing short strings (less than a database
page in length). The typical assertion seen in this instance was assertion
201501 - "Page for requested record not a table page or record not present
on page". This has been fixed.
================(Build #3786 - Engineering Case #547205)================
Renaming a column in a table having referential action triggers, could have
resulted in a server crash. This has been fixed.
================(Build #3786 - Engineering Case #546587)================
If the option Chained was set to off (i.e. auto-commit enabled), executing
an INSERT, UPDATE or DELETE statement inside a BEGIN ATOMIC block would have
resulted in error -267 "COMMIT/ROLLBACK not allowed within atomic operation".
This has been fixed. The DML statement will now be allowed to execute, and
a commit (or rollback if an error occurs) will be performed automatically
at the end of the atomic block.
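A minimal sketch of the affected pattern (the table name is hypothetical); with this fix the INSERT executes, and an automatic commit occurs at the end of the atomic block instead of error -267 being raised:

```sql
-- Enable auto-commit by turning chained mode off for this connection.
SET TEMPORARY OPTION Chained = 'Off';

BEGIN ATOMIC
    -- Previously raised SQLCODE -267; now allowed, with an automatic
    -- commit (or rollback on error) at the end of the atomic block.
    INSERT INTO t1( c1 ) VALUES ( 1 );   -- t1 is a hypothetical table
END;
```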
================(Build #3785 - Engineering Case #547248)================
If the server had shut down due to a start-up error involving a database
that participated in mirroring, the shutdown reason would not have been recorded
correctly. On Unix systems, it would have been recorded as being a result
of a SIGHUP signal. On Windows systems, it would have been recorded as being
a result of a request from the console. This has been fixed so that it is
now correctly recorded as being a result of a start-up error.
================(Build #3785 - Engineering Case #547198)================
Changes for Engineering case 536370 introduced a problem where simple select
statements could have caused a server crash for specific forms of table schema
and index definition. This has been fixed.
================(Build #3785 - Engineering Case #547076)================
Executing a "MESSAGE ... TO CLIENT FOR CONNECTION n" statement
could have resulted in a message with mangled characters in the message text.
For this to have occurred, the source connection and destination connection
must have been connected to databases with different collation sequences.
This has been fixed.
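For reference, the statement in question takes the form below (connection id 42 is hypothetical); the fix ensures the message text survives translation between the two databases' collation sequences:

```sql
-- Send message text to the client of connection 42.
MESSAGE 'Backup starting in five minutes' TO CLIENT FOR CONNECTION 42;
```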
================(Build #3784 - Engineering Case #546432)================
The server may have hung while performing updates to rows containing blobs.
This has been fixed.
================(Build #3783 - Engineering Case #547036)================
The PacketsSent and PacketsReceived properties were being updated by HTTP
and HTTPS connections, even though the HTTP protocol has no real concept
of packets. This has been fixed by no longer updating these properties for
HTTP and HTTPS connections. The BytesSent and BytesReceived properties will
continue to be updated for HTTP and HTTPS connections.
================(Build #3782 - Engineering Case #545901)================
If an application called an external environment procedure immediately after
issuing a commit, and the external environment procedure performed server-side
calls and issued its own commit, then there was a chance the server
would have failed assertion 201501 "Page for requested record not a table
page or record not present on page". This problem has now been fixed.
================(Build #3781 - Engineering Case #545933)================
Attempting to backup a database of size 5GB or larger with the clause "WAIT
BEFORE START" could have caused the server to hang. Backups of databases
this size and larger cause the server to calibrate the dbspaces, which is
done to improve the parallel performance of the backup. However, if the
calibration updated the catalog, then the WAIT BEFORE START clause would
have caused the backup to wait on itself. This has been fixed by turning
off calibration for large databases when the WAIT BEFORE START clause is
specified. If desired, the CALIBRATE DATABASE statement can be issued before
the backup begins.
A workaround is to run the backup without the WAIT BEFORE START clause.
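The two approaches mentioned can be sketched as follows (the backup directory is a hypothetical example):

```sql
-- Option 1 (workaround): back up without the WAIT BEFORE START clause.
BACKUP DATABASE DIRECTORY 'd:\\backup';

-- Option 2: with this fix, WAIT BEFORE START is safe on large databases
-- because calibration is skipped when the clause is present.
BACKUP DATABASE DIRECTORY 'd:\\backup' WAIT BEFORE START;
```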
================(Build #3780 - Engineering Case #546867)================
A partial index scan using an index with DESC columns, may have been inefficient.
For this problem to have occurred, the last column used to define the range
must have been in descending (DESC) order, and the index must have contained
NULLs. This has been fixed.
For example:
Previously, the server would have read approximately 85 index leaf and table
pages for the query below, now the number of pages read is approximately
10.
CREATE TABLE CURRENCY_TABLE
(CURRENCY CHAR(10) NOT NULL,
DOLLAR_EQUIV NUMERIC(5, 2),
PRIMARY KEY (CURRENCY));
INSERT INTO "CURRENCY_TABLE" VALUES ('DOLLAR', 1.00);
INSERT INTO "CURRENCY_TABLE" VALUES ('POUND', 1.91);
INSERT INTO "CURRENCY_TABLE" VALUES ('DM', .45);
INSERT INTO "CURRENCY_TABLE"
select 'DM', NULL from sa_rowgenerator(1,20000);
commit;
create index currency_idx_1 on currency_table( dollar_equiv desc );
call sa_flush_cache();
select count(*) from CURRENCY_TABLE with (index (currency_idx_1 ))
where DOLLAR_EQUIV < 1000 option( force optimization);
================(Build #3780 - Engineering Case #544948)================
The system function xp_sendmail() would have always encoded the subject line
of an email being sent. While this is properly decoded when the email is
delivered to an email client, it was not decoded in many instances when sent
via SMS. A change has been made to not encode the subject line when the subject
contains only 7-bit ASCII characters. Attempting to send a message containing
non 7-bit ASCII characters to an SMS client will still result in the subject
line being encoded. It will be up to the carrier to properly convert from
SMTP to SMS.
================(Build #3779 - Engineering Case #546908)================
When running the Unload utility (dbunload) to unload a pre-10.0 version database,
the directory for the unloaded table data would not have been created. This
has now been fixed.
================(Build #3779 - Engineering Case #546172)================
If a parallel execution plan was executed using an exists join (JE / Exists
Join), then it was possible for the statement to return the wrong answer.
This has been fixed.
================(Build #3779 - Engineering Case #545785)================
If a SELECT statement referenced a single table and contained a TOP n or
FIRST clause, it was possible for a slow execution plan to be picked. In
order for this to have occurred, there needed to be at least two indexes
that could be used for the plan and the depth of the indexes needed to differ.
This has been fixed.
================(Build #3778 - Engineering Case #545455)================
A server started with the -zl or -zp command line options (or by calling
the system procedure sa_server_option() with RememberLastStatement or RememberLastPlan),
that serviced large numbers of HTTP connections, could have crashed. This
issue would have been rare and highly timing dependent. This has now been
fixed.
================(Build #3778 - Engineering Case #544181)================
Calling the system function traced_plan() for a query containing captured
host variables could have failed and returned a conversion error. When using
Profiling Mode in the Sybase Central plugin, this caused the profiling browser
to fail to display a "Best Guessed Plan" for a query whose original
graphical plan was not captured. This has been fixed.
================(Build #3777 - Engineering Case #545815)================
The database server could have leaked memory, and eventually failed with
an 'Out of Memory' error, when using TDS connections (eg. jConnect) that
fetched string data. This has now been fixed. This fix is in addition to
the memory leak that was fixed for Engineering case 543069.
================(Build #3777 - Engineering Case #544047)================
Validation of a database may have reported that some tables contained orphaned
blobs. This was only true for tables that were stored in a dbspace other
than the SYSTEM dbspace. This should also have only occurred on databases
using snapshot isolation. Databases containing these orphaned blobs have
pages which are being wasted. The only way to free these pages for reuse
is to rebuild the database file. This problem has been fixed so that generating
orphaned blobs should no longer be possible.
================(Build #3775 - Engineering Case #545707)================
When running on Unix systems, the server could have hung and not proceeded
any further while generating a mini-core. This has been fixed.
================(Build #3775 - Engineering Case #545704)================
When trying to create a Transact-SQL function or procedure, use of the "expr
AS name" syntax in the arguments to a function call would have given
an error. This has been fixed.
A workaround is to write the function or procedure using the Watcom-SQL
dialect.
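For illustration (assuming CAST, whose argument uses the "expr AS name" form, is the construct involved; the procedure name is hypothetical), the same body that failed under the Transact-SQL dialect parses when written as a Watcom-SQL procedure:

```sql
-- Watcom-SQL dialect: CAST( expr AS type ) inside the body parses correctly.
CREATE PROCEDURE show_total()
BEGIN
    SELECT CAST( 42 AS NUMERIC( 10, 2 ) ) AS total;
END;
```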
================(Build #3775 - Engineering Case #545383)================
Queries accessing a table via an index could have performed poorly after
performing many update and delete operations on the indexed table. If two
leaf pages that required cleaning were merged, the second of the two would
not have been cleaned, which could have resulted in many almost empty leaf
pages. This has been fixed.
================(Build #3775 - Engineering Case #544460)================
Particular forms of complex predicates could have caused the server to crash
when executed against a string column with declared size no more than 7 bytes.
This has been fixed.
================(Build #3775 - Engineering Case #543069)================
The server could have leaked memory, possibly leading to an 'Out of Memory'
error, when using TDS connections (eg. jConnect) that fetched string data.
This has now been corrected.
================(Build #3774 - Engineering Case #546854)================
If a query contained DISTINCT, ORDER BY and GROUP BY clauses and an expression
in the ORDER BY clause appeared in the GROUP BY clause, but not in the
SELECT list, then the wrong error was returned, namely "Function or
column reference to ... must also appear in a GROUP BY." This has been
fixed so that the correct error message is now returned: "Function or
column reference to ... in the ORDER BY clause is invalid."
For example, the query:
SELECT DISTINCT X
FROM T
GROUP BY E
ORDER BY E
Would have returned the error: "Could not execute statement. Function
or column reference to 'E' must also appear in a GROUP BY." SQLCODE=-149,
ODBC 3 State="42000"
================(Build #3774 - Engineering Case #545621)================
The server may have crashed after a materialized view was dropped. This has
been fixed.
================(Build #3774 - Engineering Case #545574)================
In certain circumstances, TLS connections that should have failed, would
have actually succeeded. This has been fixed. Note that this problem does not
occur on Mac OS X systems.
================(Build #3773 - Engineering Case #545374)================
When using the SQL Anywhere ODBC driver, if SQLBindCol was called immediately
after a SQLFetch and before calling SQLBulkOperations( SQL_UPDATE_BY_BOOKMARK
), then the SQLBulkOperations update would have failed. This problem has
been fixed.
================(Build #3773 - Engineering Case #544187)================
A server could have failed to start if another server was starting at the
same time. The server that failed to start would have displayed the error
"Database cannot be started -- No such file or directory". The
error message was also misleading since the database file did exist; the
server actually had a problem opening the database's temporary file. This
has been fixed.
================(Build #3772 - Engineering Case #544669)================
If a column histogram incorrectly contained a selectivity estimate of 100%
for the NULL values, the best plan found by the optimizer could have been
very inefficient. This problem affected the computation of the selectivity
estimation of predicates of the form "T.X theta expression" (theta
can be =, <>, >, >=, < or <=) which would have incorrectly
been computed as 0%. A change has been made to the optimizer so that it no
longer trusts a NULL selectivity estimation of 100%, instead it uses the
computed selectivity estimation of (100% - epsilon).
To test the estimated selectivity of the NULL values for a column T.X use:
"select first estimate(X, null) from T".
A workaround is to recreate statistics on the column T.X by using: "create
statistics T (X)". However, if the column T.X frequently contains only NULL
values that are later updated to non-NULL values, it is recommended to upgrade
to a server containing this fix.
================(Build #3772 - Engineering Case #500489)================
Attempting to start the server may have failed with the error "License
file not found" if the following conditions were true:
- the server was running on one of the Unix platforms (Linux, Solaris, AIX,
HP-UX, or Mac OS X), and the executable was located in a nonstandard location,
i.e. not in a bin32 or bin64 directory
- the associated license file was in the same directory as the server (which
is where it should be)
- the PATH environment variable did not contain the directory where the
executable was located
- the support libraries (libdbserv10_r.so, libdbtasks10_r.so, etc) were
located in a different directory than the executable
- the user's current working directory was not the same directory as where
the executable was located
- when attempting to start the server a full or relative pathname to the
executable was specified
This has been fixed. The server will now start correctly.
Workarounds include:
- add the directory that contains the server executable to the PATH
- make the executable directory the current directory before starting the
server
- name the directory that contains the executable bin32 or bin64
================(Build #3771 - Engineering Case #545684)================
When optimizing simple SQL statements the server was skipping some of the
optimizations implemented to improve DESCRIBE time. This has been corrected.
For more information see:
SQL Anywhere Server - SQL Usage
Query Optimizer
Query optimization and execution
Query processing phases
================(Build #3771 - Engineering Case #544791)================
In rare timing dependent cases, the server could have hung on shutdown, or
possibly failed in other ways, after executing a DROP CONNECTION statement.
This has now been fixed.
================(Build #3771 - Engineering Case #543694)================
When using Snapshot isolation, WITH HOLD cursors would have failed to see
rows modified by their own connection after the connection executed a COMMIT.
This has been fixed so that when using Snapshot, Statement-Snapshot or
Readonly-Statement-Snapshot isolation, WITH HOLD cursors will see a snapshot
of all rows committed at the snapshot start time, as well as all modifications
made by the current connection since the start of the transaction within
which the cursor was opened.
Note that the contents of the cursors are unaffected by the current transaction
committing.
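A sketch of the corrected behaviour (table t1 is hypothetical):

```sql
-- Under snapshot isolation, a WITH HOLD cursor now sees its own
-- connection's changes even after a COMMIT.
SET TEMPORARY OPTION isolation_level = 'snapshot';
BEGIN
    DECLARE cur CURSOR WITH HOLD FOR SELECT c1 FROM t1;
    OPEN cur;
    UPDATE t1 SET c1 = c1 + 1;
    COMMIT;     -- previously the cursor lost sight of the UPDATE above
    -- Fetches from cur now reflect the snapshot plus this connection's
    -- own modifications since the transaction in which cur was opened.
    CLOSE cur;
END;
```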
================(Build #3771 - Engineering Case #539106)================
In some cases where expressions were evaluated in stored procedures or batches
outside of SELECT, INSERT, UPDATE or DELETE statements, it was possible for
the expressions to be evaluated incorrectly. The incorrect behaviour would
have appeared if arithmetic expressions were used with one argument a DATE,
TIME, or TIMESTAMP, or both arguments were strings. In these cases, the incorrect
domain could have been used for the arithmetic expression if it were used
in an IF, CASE, IN, or concatenation operation.
For example, the following select improperly returned '0002'; the correct
answer should be a numeric with value 2.
create variable @v_res long varchar;
set @v_res = if 1=1 then '0002' else '1' - '2' endif;
select @v_res
This problem could have also resulted in conversion errors being returned
in cases where they should not, or missed in cases where they should have
been generated. This problem has now been fixed.
================(Build #3770 - Engineering Case #544961)================
The Stored Procedure Debugger was not able to set breakpoints on statements
within exception handlers. This has been fixed.
================(Build #3770 - Engineering Case #530302)================
Attempting to execute a batch which did not take a host variable, but included
the :var syntax, could have resulted in a communication error. The :var
syntax can be used in a CREATE or ALTER SERVICE statement. This has now been
fixed.
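The :var syntax referred to is the host-variable placeholder used in service definitions, for example (service and procedure names are hypothetical):

```sql
-- :url is substituted from the HTTP request when the service is invoked.
CREATE SERVICE show_page TYPE 'RAW' AUTHORIZATION OFF USER DBA
    AS CALL render_page( :url );
```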
================(Build #3769 - Engineering Case #544670)================
In some cases, statements containing complex expressions could have used
an excessive amount of memory that could affect other connections in the
server. This has been fixed so that attempts to execute large expressions
that can not be handled will now generate the error:
-890 "Statement size or complexity exceeds server limits"
================(Build #3769 - Engineering Case #544496)================
If the server was started with the "-x none" command line option,
and without the -xs option, then calling an external web procedure would
have caused the server to crash. This has been fixed.
================(Build #3769 - Engineering Case #544318)================
Specific forms of SELECT statements could have caused a server crash when
opened with particular cursor types. This problem has been fixed.
As an interim work-around, the server command line switch "-hW AllowBypassCosted"
can be specified to avoid this problem.
================(Build #3769 - Engineering Case #544199)================
In rare situations, the server could have crashed during graphical plan construction.
For the problem to occur, one of the tables used in the query had to have
a unique index and a foreign key index sharing the same columns and settings,
and the index had to be considered or used for the query. This has been fixed.
================(Build #3768 - Engineering Case #543940)================
The server could have stopped responding and continued to consume CPU when
processing the SUBSTR() function. For this to have occurred, the SUBSTR()
call must have appeared on the right hand side of a string concatenation
operation, and must also have been over a string that comes from a row in
a table. Additionally, the string
data must be greater than one database page in length. Even if all these
conditions are met, it is very unlikely that this bug will be hit, as it
depends on other internal server conditions as well. This has now been fixed.
================(Build #3768 - Engineering Case #543647)================
The REFRESH MATERIALIZED VIEW statement is used to rebuild the contents of
a materialized view. When this statement was used inside a stored procedure,
execution of the procedure could have caused the server to crash under certain
circumstances. This problem has been corrected, and the server now executes
the stored procedure correctly. The problem can be avoided by using EXECUTE
IMMEDIATE with the REFRESH statement.
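The workaround mentioned can be sketched as follows (procedure and view names are hypothetical):

```sql
CREATE PROCEDURE refresh_sales_mv()
BEGIN
    -- Wrapping the REFRESH in EXECUTE IMMEDIATE avoided the crash on
    -- unfixed servers.
    EXECUTE IMMEDIATE 'REFRESH MATERIALIZED VIEW sales_mv';
END;
```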
================(Build #3768 - Engineering Case #543631)================
If a simple statement was executed with a particular form of ORDER BY clause,
then the server could have crashed while executing the statement. This has
been fixed.
================(Build #3768 - Engineering Case #542186)================
Under rare circumstances, diagnostic tracing could have failed to record
some cursor information for statements within procedures, for example, information
about cursor close time and the graphical plan. This has been fixed.
================(Build #3767 - Engineering Case #544486)================
If an application connected using Open Client attempted to fetch a result
set containing a large number of columns (more than 3000), then the application
would have failed with a TDS protocol error. This problem has now been fixed.
Note, that in order to fetch such a result set, the application must be
using Open Client 15.
================(Build #3767 - Engineering Case #543910)================
The version 10 and 11 servers were truncating 32-byte names to 31 bytes.
So when a version 10 or 11 client attempted a shared memory connection specifying
a 32-byte server name that had a common prefix of exactly 31 bytes with a
running version 10 or 11 server that also had a 31-byte name, the connection
attempt would have failed. As well, if a version 10 or 11 client attempted
a shared memory connection specifying a server name that had a common prefix
of exactly 31 bytes with a running version 9 or prior server that had a name
longer than 31 bytes, the connection attempt would have failed. This problem
has been fixed. Note that for version 10 and 11, the fix affects both client
and server. For version 9, the fix is just to the server. However, an unmodified
version 10 or 11 client will be able to establish such a connection against
an unmodified version 9 server.
================(Build #3767 - Engineering Case #543812)================
If a user caused an event to fire, e.g. by disconnecting to fire a Disconnect
event, and another user immediately caused that user to be dropped, the server
would have crashed. This has been fixed.
================(Build #3764 - Engineering Case #543835)================
The functions YEARS(), MONTHS(), WEEKS(), DAYS(), HOURS(), MINUTES(), and
SECONDS() could have been described with the incorrect data type. If these
functions were used with two parameters with the second parameter an untyped
expression, then the expression was assigned the incorrect data type. Untyped
expressions include NULL constant literals and host variables that are not
yet bound to a type, for example during DESCRIBE.
For example, the following expression was incorrectly described as TIMESTAMP
(it should be INT):
select months( current timestamp, NULL )
This incorrect type could have affected more complex expressions composed
with one of the affected functions as an argument. This problem has been
fixed.
Note, this change could alter the definition of materialized views; views
containing the affected constructs should be refreshed.
================(Build #3764 - Engineering Case #543261)================
The server could have hung while concurrently updating blob columns. This
has been fixed.
================(Build #3763 - Engineering Case #543826)================
If an application called sp_remote_columns to determine the domain ids of
an UltraLite table, and the UltraLite table contained a UniqueIdentifier
column, then the domain id of the uniqueidentifer column would have been
incorrectly returned as Char. This problem has now been fixed.
================(Build #3763 - Engineering Case #543562)================
If an application was connected using jConnect and attempted to fetch a result
set containing a large number of columns (more than 3000), then the application
would have failed with a TDS protocol error. This problem has now been fixed.
Note, that in order to fetch such a result set, the application must be
using jConnect 6.x.
================(Build #3763 - Engineering Case #543518)================
The SQL Anywhere http option "AcceptCharset" generated a SQL error
with "SQLCODE -939 Invalid setting for HTTP option" when a match
was not found within the union of the client's Accept-Charset list and the
server's AcceptCharset http option charset list. This has been fixed.
With this change a SQL error is now generated only if the http option value
is malformed or none of the charsets within the value are supported by SQL
Anywhere. In addition, the run-time selection of a suitable response charset
has changed to provide more control over the charset selection. Primarily,
when the union of the server and client charset lists is empty, a charset
is now selected based on the server's AcceptCharset http option value, not
from the client's Accept-Charset request header.
by allowing an asterisk (*) to be specified within the AcceptCharset http
option list. An asterisk has the meaning that, should the client/server
charset union be empty, try to use the preferred charset specified by the
client. If none are found, then select from the server's AcceptCharset
http option list. A summary of the processing priority of the various cases
follow:
Processing Priority cases:
1 - If a charset can be selected from the union of charsets from the AcceptCharset
http option and the Accept-Charset HTTP request header, then it will be
used (no change in behaviour).
2 - If the client sends an Accept-Charset HTTP request header, but none
of the charsets match the AcceptCharset http option, then use the first and/or
highest q-value charset specified within the AcceptCharset http option.
(This is a behaviour change).
3 - If the client does not send an Accept-Charset HTTP request header, select
the first and/or highest q-value charset specified within the AcceptCharset
http option (no change in behaviour).
Caveats:
a. Within the AcceptCharset http option value, the placement of the '+'
token (which specifies the use of database charset whenever possible regardless
of the q-value assigned to it by the client) is now significant. If '+'
is specified anywhere within the http option value then case 1) will be true
if the client happens to specify the database charset. If '+' is specified
first and/or it has the highest q-value, then cases 2) and 3) above would
resolve to using the database charset.
b. Within the AcceptCharset http option value, an asterisk (*) signifies
that any client charset (as specified by its Accept-Charset HTTP header)
should be considered prior to defaulting to the http option charset list.
The best match within the union of client/server charsets ( case 1) ) has
priority. In the processing priority cases above, having failed case 1)
the client's Accept-Charset is scanned for a suitable charset. If a suitable
charset is not found, then a charset is selected according to case 3).
c. The SQLCODE -939 error is now generated only if the http option value is
malformed, or none of the charsets specified within its value are supported
by SQL Anywhere. The database charset is selected whenever a SQLCODE -939
error is generated.
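As an illustration, the asterisk behaviour described above could be enabled
from within a web service procedure. The charset list here is hypothetical;
SA_SET_HTTP_OPTION is the system procedure used to set http options for the
current request:

```sql
-- Hypothetical AcceptCharset setting: '+' prefers the database charset,
-- the named charsets form the server's own list, and the trailing '*'
-- allows the client's preferred charset to be tried before defaulting
-- to this list when the client/server union is empty.
CALL SA_SET_HTTP_OPTION( 'AcceptCharset', '+,iso-8859-1,utf-8,*' );
```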
================(Build #3762 - Engineering Case #541742)================
Virtual tables were incorrectly considered updatable. As a result, the server
may have crashed if an UPDATE statement targeted a virtual table. This has
been fixed.
================(Build #3761 - Engineering Case #542962)================
On a busy server, if an application made an external environment or Java
call which could have resulted in a thread deadlock error, there was a small
possibility that the server would have hung. This problem has now been fixed.
================(Build #3760 - Engineering Case #543245)================
In certain configurations, executing the REMOTE RESET statement could have
caused the server to crash. This has been fixed.
================(Build #3760 - Engineering Case #530736)================
The changes for Engineering case 491180 (enable write through on CE) introduced
a dependency on the file note_prj.dll, which may not have been included on
non-standard Windows CE devices. On these devices, the server may have failed
to start with an error that it could not find the server or one of its components.
Standard Windows Mobile devices were not affected. This has been corrected:
note_prj.dll is now loaded dynamically, and if it is not found, the server
will not enable write through on pre-CE 5 devices.
================(Build #3759 - Engineering Case #543006)================
If an application was using a JDBC based Remote Data Access server to fetch
long multi-byte string data, then there was a possibility the server would
have crashed. This problem has been fixed.
================(Build #3759 - Engineering Case #543002)================
An HTTP server response returning an error status, such as "404 Not
Found", was returned in the server's language, while the Content-Type header
incorrectly specified charset=ISO-8859-1. This has been fixed so that HTTP
status messages are now always returned in English; the Content-Type header
charset=ISO-8859-1 is therefore now correct.
================(Build #3759 - Engineering Case #542959)================
The Interactive SQL utility's (dbisqlc) OUTPUT statement was incorrectly
using the value (NULL) for null values, instead of using a blank value. This
has been fixed.
This can be worked around by using dbisql or by correcting the output_nulls
Interactive SQL option using the statement:
set option output_nulls = ''
================(Build #3759 - Engineering Case #542868)================
In rare circumstances the server could have hung updating a blob column.
This has been fixed.
================(Build #3759 - Engineering Case #542840)================
If a Disconnect event was defined and a connection was dropped using DROP
CONNECTION, the value of event_parameter('DisconnectReason') would have been
incorrect when evaluated inside the event handler. This has been fixed.
================(Build #3759 - Engineering Case #541857)================
Transactions blocked on a row lock placed by an INSERT, UPDATE, or an isolation
level 2 or 3 FETCH, may have waited on the wrong connection, or may have
waited indefinitely (until the transaction was forcibly aborted). For this
to have occurred, the connection holding the lock must have been in the process
of disconnecting when the transaction blocked. While correctness was not
affected, application performance could have suffered. This has now been
fixed.
================(Build #3758 - Engineering Case #542825)================
SQL Anywhere attempts to create a single physical data structure when multiple
indexes on the same table are created with identical properties. The dbspace
id recorded in the catalog for a newly created index referred to the dbspace
id for the new logical index instead of the dbspace id of the existing physical
index shared by the new index. This problem has been corrected, and the server
will now record the dbspace id of the existing index whenever sharing takes
place.
Note, existing databases with this problem can be corrected by dropping
and recreating the logical indexes sharing a physical index. Whether or not
an existing database has an instance of this problem can be determined by
checking the physical_index_id and the file_id fields of the system view
SYS.SYSIDX. The problem cases exist in a database when two indexes on the
same table have the same physical_index_id values but their file_id values
differ.
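The check described above can be written as a self-join of SYS.SYSIDX. This
sketch assumes the physical_index_id and file_id column names given in the
text:

```sql
-- List pairs of logical indexes on the same table that share a physical
-- index but record different dbspace (file) ids -- the problem case.
SELECT a.table_id, a.index_id AS index_a, b.index_id AS index_b
  FROM SYS.SYSIDX a
  JOIN SYS.SYSIDX b
    ON a.table_id = b.table_id
   AND a.physical_index_id = b.physical_index_id
   AND a.index_id < b.index_id
 WHERE a.file_id <> b.file_id;
```

Any rows returned indicate logical indexes that should be dropped and
recreated as described above.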
================(Build #3757 - Engineering Case #542514)================
In a SQL Anywhere SOAP response, binary data types greater than 250 bytes
in length were not base64 encoded. This has been fixed, and applies to SQL
Anywhere SOAP services that have been defined with DATATYPE ON or DATATYPE
OUT.
================(Build #3756 - Engineering Case #542524)================
If an application on a Unix system used the iAnywhere JDBC driver to connect
to a DB2 server using a 64-bit DB2 ODBC driver, then calling the Connection.getTransactionIsolationLevel()
method may have crashed the client. This problem has been fixed.
================(Build #3756 - Engineering Case #542519)================
If an application made a large number of remote calls to fetch data from
a JDBC based Remote Data Access server, then there was a chance the server
would have crashed. For this problem to have occurred, the remote table and/or
column names must have been longer than 30 characters. This problem has now
been fixed.
================(Build #3756 - Engineering Case #542397)================
The DISH service did not set the HTTP Content-Type response header, which
occasionally caused Internet Explorer 7 to fail to render the WSDL. This
has been fixed so that the response headers now include Content-Type: text/xml;
charset="utf-8".
Note, the charset qualifier is not included in 9.0.2, since its output is
in the database character set. This change is in accordance with the WSDL 1.1
specification, see http://www.w3.org/TR/wsdl#_Toc492291097.
================(Build #3756 - Engineering Case #542139)================
Sybase Central would have reported errors when attempting to browse a database
that had the quoted_identifier option set to Off. SQL statements sent to
the database had reserved words that were used as system table columns quoted
(for example, SYS.SYSTAB."encrypted"). This did not work if the
quoted_identifier option was Off, so the plug-in now temporarily sets it
to On.
================(Build #3756 - Engineering Case #536541)================
If an application attempted to update or delete from a proxy table joined
with a local table, then the server may have failed an assertion, or crashed.
The server will now correctly give error -728 'Update operation attempted
on non-updatable remote query'.
================(Build #3753 - Engineering Case #541060)================
The server, in rare circumstances, could have hung updating string columns.
This has been fixed.
================(Build #3752 - Engineering Case #545528)================
Inexpensive statements may have taken a long time to optimize (i.e. OPEN
time was high), or may have had inefficient access plans. This has now been
fixed. The only condition required for this to happen was that parallel
access plans were considered by the optimizer.
For more info on intra-query parallelism see:
SQL Anywhere Server - SQL Usage
Query Optimizer
Query optimization and execution
Query execution algorithms
Parallelism during query execution
This change is particularly important when moving to version 11.0.1, from
10.0.1 or 11.0.0, and running the personal server (dbeng11). The 10.0.1 personal
server (dbeng10) and 11.0.0 personal server (dbeng11) are restricted to using
only one CPU, and only one core if the CPU has multiple cores.
Also, the 10.0.1 optimizer did not consider the maximum number of concurrent
threads (i.e. the ConcurrentThreads global variable), and could generate
parallel plans that would not be executed in parallel by the 10.0.1 personal
server; only one parallel branch would process all the rows. This is a bug
which was fixed in 11.0.0 GA.
The 11.0.1 personal server can use all the cores available in one CPU, which
means the 11.0.1 optimizer will cost and generate access plans using parallel
physical operators when multiple cores are available. This difference in
behaviour related to the number of cores allowed to be used by the personal
server may result in a very different access plan being executed by 11.0.1
compared to the access plan executed by 11.0.0, for the same SQL statement.
================(Build #3752 - Engineering Case #541772)================
Calling the system function property('FunctionMaxParms',0) would have returned
NULL, instead of the correct value 0. This has been fixed. This corresponds
to the maximum number of arguments for the abs function.
================(Build #3752 - Engineering Case #541770)================
If an application connected to a database with a multi-byte character set
made a remote procedure call using one of the JDBC remote server classes,
then there was a chance the server could have either hung or crashed. For
this to have occurred, the remote procedure must have returned a result set
containing long character columns, and the proxy procedure must not have
initially been defined with a proper result clause. This problem has now
been fixed.
================(Build #3752 - Engineering Case #541744)================
If a CREATE EXISTING TABLE command was used to create a proxy table to a
remote server using one of the JDBC remote server classes, then the server
would have leaked memory. This problem has now been fixed.
================(Build #3752 - Engineering Case #541622)================
If a Transact-SQL CREATE PROCEDURE statement appeared within a BEGIN ...
END block, the syntax error given would have been "Syntax error near
'end' on line nnn", where nnn was the line corresponding to the end
of the block. Now the error points to the first point at which the server
detected a dialect conflict.
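A hypothetical example of the conflict; the procedure name and body are
illustrative only:

```sql
-- A Watcom-SQL BEGIN ... END block containing a Transact-SQL
-- CREATE PROCEDURE. The syntax error is now reported at the point
-- of the dialect conflict (the Transact-SQL parameter/AS syntax)
-- rather than at the final END.
BEGIN
    CREATE PROCEDURE p @x INT AS SELECT @x;
END
```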
================(Build #3752 - Engineering Case #541615)================
If an application made a remote procedure call to a procedure that returned
a result set with unsigned data types, there was a possibility that the call
would have failed with a conversion error. This problem has now been fixed.
================(Build #3751 - Engineering Case #541072)================
Under rare circumstances, a query plan using the Merge Join algorithm with
an ordered GroupBy on the right hand side of the join could have returned
incorrect results. This has been fixed.
================(Build #3751 - Engineering Case #540387)================
When an application made an external environment procedure call, and then
issued a commit followed by another external environment call, there was
a chance the server would have crashed. This problem would not show up if
either the original or external connection was accessing a temporary table.
It has now been fixed.
================(Build #3750 - Engineering Case #541333)================
Under concurrent access, a connection may have blocked on row locks, waiting
for other connections that had long since released their row locks. This
would only have happened if the connection had no changes to commit. This
has been fixed.
================(Build #3749 - Engineering Case #536370)================
For workloads that consisted of very inexpensive queries (for example, where
each statement was executed in less than a millisecond), the server performance
was slower than previous versions. This has been improved. As part of this
change, a larger class of statements now bypasses query optimization. The
properties QueryBypassed and QueryOptimized can be used to measure how many
statements bypass optimization, or use the full query optimizer. The time
required when a table was first accessed could have been slower than previous
versions. This was particularly true for databases with many columns, and
was very noticeable on CE platforms. This has been fixed.
Further, in some cases the plan text for a simple statement could show the
table name instead of the correlation name. This has also been fixed.
================(Build #3748 - Engineering Case #541861)================
The server, when run on Solaris systems, had poor performance compared to
previous versions; specifically, TCP/IP communication was slower. Several
changes have been made to correct this.
================(Build #3748 - Engineering Case #541175)================
It was possible, although likely rare, for the server to crash on shutdown.
This has been fixed.
================(Build #3747 - Engineering Case #541201)================
When running Application Profiling, the start_time and finish_time columns
of the sa_diagnostic_request table were incorrectly set. The column start_time
was set to the correct start time plus the value from the duration_ms column,
while the column finish_time was set to the correct start time plus twice
the value from the duration_ms column. This has now been corrected.
================(Build #3747 - Engineering Case #541200)================
Some missing items have been added to the graphical and long plans, as follows:
1 - The HAVING predicate was not dumped in the long plan for any GroupBy
physical operator.
2 - The HAVING predicate was not dumped for GroupBySortedSets physical
operators in the graphical plan.
3 - The number of extension pages was missing in the "Table Reference"
section of the graphical plan.
4 - The 'Estimated Cache Pages' value was missing in the long plan.
================(Build #3747 - Engineering Case #540569)================
Statements using EXISTS() subqueries with INTERSECT and EXCEPT may have returned
incorrect results. This would have occurred when at least one of the select
lists inside the EXISTS() subquery used "*". This has now been
fixed.
For example:
select filename, file_id from t1 where
(
exists (select * from t1 except select * from t2)
OR
exists (select * from t2 except select * from t1)
)
================(Build #3747 - Engineering Case #536805)================
Grouping queries containing a CUBE, ROLLUP or GROUPING SETS clause may have
returned incorrect results. The query must also have had a HAVING clause
with at least one null sensitive predicate (e.g., 'T.C IS NULL' , 'T.C IS
NOT NULL' ). This has been fixed.
An example:
select dim1, dim2, sum (val1), stddev (val2)
from tt
group by cube (dim1, dim2)
having dim1 is not null or dim2 is not null
================(Build #3746 - Engineering Case #541073)================
On AIX 6, 64-bit software would not have found the LDAP support libraries,
even if they were in the LIBPATH. The location of the LDAP system libraries
was changed in AIX 6. The 64-bit library is in:
/opt/IBM/ldap/V6.1/lib64/libibmldap.a
and the 32-bit library is in:
/opt/IBM/ldap/V6.1/lib/libibmldap.a
This has been fixed.
Note that you still need to ensure that the directory with the LDAP libraries
is in the LIBPATH. For example, for 64-bit libraries:
export LIBPATH=/opt/IBM/ldap/V6.1/lib64:$LIBPATH
and for 32-bit libraries:
export LIBPATH=/opt/IBM/ldap/V6.1/lib:$LIBPATH
As a work around to use SQL Anywhere LDAP support with AIX 6, create links
in /usr/lib as follows (must be root):
cd /usr/lib
ln -s /opt/IBM/ldap/V6.1/lib64/libibmldap.a libibmldap64.a
ln -s /opt/IBM/ldap/V6.1/lib/libibmldap.a
================(Build #3746 - Engineering Case #540575)================
When in Profiling Mode in Sybase Central, clicking the Index Consultant,
or DBISQL, icon for a statement on the Details tab, could have resulted in
a syntax error. The problem was caused by syntax errors in SQL statements
used by Sybase Central, which have now been fixed.
================(Build #3746 - Engineering Case #540048)================
When attempting to set the non_keywords option to a value that contained
a keyword already listed in the current value of the non_keywords option,
an invalid option setting error would have been reported. This has been fixed.
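For example (the keyword names here are chosen for illustration), the
following sequence previously failed on the second statement because
'truncate' was already present in the option's current value:

```sql
SET OPTION PUBLIC.non_keywords = 'truncate';
-- Previously reported an invalid option setting error; now accepted.
SET OPTION PUBLIC.non_keywords = 'truncate,synchronize';
```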
================(Build #3745 - Engineering Case #540921)================
Attempting to access a remote server defined using one of the JDBC classes,
could have caused the server to crash if Java failed to start. This problem
has now been fixed.
================(Build #3744 - Engineering Case #540703)================
When attempting to execute a query that referenced a proxy table mapped to
a DB2 table, and one of the columns in the DB2 table was of type "varchar
for bit data", there was a possibility that fetching data from the
proxy column would have resulted in data truncation. This problem does not
exist for BLOB, "char for bit data" and "long varchar for
bit data" DB2 columns. This has now been fixed.
================(Build #3743 - Engineering Case #540681)================
Running out of non-cache memory may have caused the server to hang. This
has been fixed.
================(Build #3743 - Engineering Case #537555)================
Dropping a table could, in rare circumstances, have caused the server to
fail assertion 102806 "Unable to delete row from SYSTABLE". This
has been fixed.
================(Build #3741 - Engineering Case #540380)================
On AIX 5.3 systems, the ApproximateCPUTime connection property could have
returned a value that was impossibly large. This has been fixed.
================(Build #3741 - Engineering Case #540371)================
In rare circumstances, the server could have crashed while disconnecting,
if the connection had created temporary procedures. This has been fixed.
================(Build #3741 - Engineering Case #540369)================
If request level logging of procedures was enabled, and a FORWARD TO statement
was executed on a remote server from an Open Client or jConnect application,
then there was a chance the server would have crashed. This problem did not
occur if a non-TDS based client was used, or if request level logging of
procedures was not enabled. This has been fixed.
================(Build #3741 - Engineering Case #540205)================
If a remote server was defined using one of the Remote Data Access JDBC classes,
then changing the value of the quoted_identifier option would not have resulted
in changing the value of the quoted_identifier option on the remote. This
problem has now been fixed.
================(Build #3741 - Engineering Case #540071)================
If an application used a prepared statement to insert an empty string via
a parameter marker into a long varchar column of a proxy table, then the
server may have hung, or given an unexpected error. Note that inserting
an empty string as a string literal worked correctly. This problem has now
been fixed.
================(Build #3741 - Engineering Case #532086)================
If a server had many concurrent connections using Java in the database support,
then there was a chance the server could either have hung, or crashed intermittently.
These hangs or crashes could also have occurred at server shutdown time.
These problems have now been fixed.
================(Build #3740 - Engineering Case #540094)================
In rare circumstances, an outgoing mirroring connection attempt to a partner,
or to the arbiter, may have hung indefinitely. This has been fixed.
================(Build #3739 - Engineering Case #539807)================
On Mac OS X systems, if the server was started on a non-default port (i.e.
other than 2638), and with an IPv6 address as the value for the MyIP option,
a UDP listener would not have been started on the default port. As a result,
clients would not have been able to locate the server via broadcasts unless
the server's port was explicitly specified in the client's connection string.
This has now been fixed.
================(Build #3736 - Engineering Case #532859)================
It was possible to get gaps between transaction logs when using the Backup
Database command to rename a transaction log, or when using dbmlsync -x to
rename and restart a transaction log. It was also possible, although more
unlikely, to have a transaction log that was missing a transaction that was
already committed to the database. This has been fixed.
================(Build #3735 - Engineering Case #539091)================
The 64-bit server for Sun Solaris performed poorly when executing queries.
This has been fixed.
================(Build #3734 - Engineering Case #538883)================
If the ansi_close_cursors_on_rollback option was set to 'ON', and request
logging of plans was enabled, the server could have crashed. This has been
fixed.
================(Build #3733 - Engineering Case #537337)================
Calling the ODBC function SQLGetProcedureColumns() would have failed with
the error -143 "Column 'remarks' not found" when using a SQL Anywhere
ODBC driver from a version prior to 10.0 connected to a version 10.0 or
later server. This was due to ODBC drivers prior to version 10.0 referencing
the SYSPROCPARM.remarks column in the SQLGetProcedureColumns() function,
a column which had been dropped in version 10.0 and later database files.
The SYSPROCPARM.remarks column has been re-added as a constant NULL.
================(Build #3731 - Engineering Case #538303)================
If while executing an ATTACH TRACING statement, the tracing server was stopped,
the server being traced could have crashed. This has been fixed.
================(Build #3730 - Engineering Case #537965)================
Executing a STOP JAVA command may have, in rare circumstances, caused the
server to crash. This has been fixed.
================(Build #3730 - Engineering Case #535799)================
Executing queries using SELECT FIRST or SELECT TOP N, referencing proxy tables
to a remote DB2 server, would have failed with a syntax error. DB2 does not
support the FIRST and TOP N syntax; instead, the query must use FETCH FIRST
ROW ONLY for FIRST, or FETCH FIRST N ROWS ONLY for TOP N. This problem has
now been fixed.
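As a sketch (the table names are hypothetical), a query such as the first
statement below is now forwarded to DB2 using the FETCH FIRST form shown in
the comment:

```sql
-- Query against a proxy table mapped to a DB2 table:
SELECT TOP 3 * FROM db2_orders_proxy;
-- Equivalent form the server must generate for DB2 (sketch):
--   SELECT * FROM orders FETCH FIRST 3 ROWS ONLY
```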
================(Build #3729 - Engineering Case #537800)================
If an application executed a remote query with a malformed field or dotted
reference in the select list, then it was possible that the server could
have crashed. An example of such a query is:
select c1.ref(), max( c2 ), c2 from t1
where c1.ref() is a meaningless expression. This problem has now been fixed
and a proper error message will be returned to the application.
================(Build #3727 - Engineering Case #537616)================
Under rare circumstances, the server could have gone into an infinite loop
after a non-recurring scheduled event was run. Any attempts to communicate
with the database on which the event was scheduled would have blocked. This
has been fixed.
================(Build #3727 - Engineering Case #537560)================
It was possible for calls to DB_Property( 'DriveType' ) on AIX systems to
erroneously return "UNKNOWN". A buffer used to enumerate the various
mounted filesystems may have been too small. This has been fixed.
================(Build #3725 - Engineering Case #536808)================
The server tracks dependencies of views on other views and tables. If a view
referenced another view and the view definition of the referenced view was
"flattened" or "inlined" within that of the referencing
view, then the server could have failed to correctly record the dependency
information. The server now behaves correctly when recording dependency information
in this situation. Any existing views can have their dependency information
recorded correctly by being recompiled.
================(Build #3724 - Engineering Case #536739)================
The server could have raised assertion 102802 - "Unable to undo
index changes resulting from a failed column alteration" if an ALTER
statement failed, or was cancelled. This has now been fixed.
================(Build #3724 - Engineering Case #536594)================
If an external function that was defined to return an integer value was assigned
to a variable declared as INT, a "Value out of range for destination"
error would have been given. This has been fixed.
================(Build #3724 - Engineering Case #536588)================
If an application connected using a TDS based client (i.e. jConnect, iAnywhere
JDBC) and attempted to use a procedure in the FROM clause of a SELECT statement,
then the TDS client may have reported a TDS protocol error. This problem
has now been fixed.
================(Build #3723 - Engineering Case #536015)================
The ALTER VIEW RECOMPILE statement can be used to rebuild the view definition
of an existing view. Among other things, the statement causes the schema
of the view columns to be regenerated. If column permissions, as opposed
to table permissions, have been granted on a view, then the recompilation
could have failed with a foreign key constraint violation on SYS.SYSCOLUMN.
The server now remembers all the column permissions on the view that exist
before the recompile statement is executed. After the view has been recompiled,
the server automatically restores the old column permissions based on column
name look-ups in the new view definition. Note that a column of the view
that no longer exists after the recompilation will have its old permissions
lost. A workaround is to drop the column permissions and to restore them
after the view recompilation.
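The workaround mentioned above can be sketched as follows; the view, column,
and user names are hypothetical:

```sql
REVOKE SELECT ( c1 ) ON v FROM u1;  -- drop the column permission first
ALTER VIEW v RECOMPILE;             -- recompile without column permissions
GRANT SELECT ( c1 ) ON v TO u1;     -- restore the permission afterwards
```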
See also Engineering case 534294 for a related issue.
================(Build #3723 - Engineering Case #534294)================
The server keeps track of the dependencies of views on other tables and views.
When the schema of a table is modified by using the ALTER TABLE statement,
the server automatically and atomically recompiles all views whose view definitions
depend upon the schema of the table being modified. All views that can be
compiled without errors with the new table schema are rebuilt persistently
in the catalog and remain valid after reflecting the changes in the table
schema. Views that fail to compile are left in a state where the server automatically
tries to recompile them in the future. If column permissions, as opposed
to table permissions, had been granted on a view dependent on the table
being modified, the execution of ALTER TABLE could have failed with referential
integrity violations on SYS.SYSTABCOL. This has been corrected so that the
server now automatically attempts to restore the old column permissions on
views that are recompiled as a consequence of ALTER TABLE. Permissions on
columns that no longer exist in the recompiled view(s) are lost.
See Engineering case 536015 for a related issue.
================(Build #3722 - Engineering Case #535988)================
Attempting to set the inline or prefix amount of a blob column to 32768
on a 32K page size database would have failed with the error:
"Illegal column definition: Column 'xxx' inline value '-32768' is invalid"
This has now been fixed. A workaround is to use the value 32767. Doing
so does not affect the amount of inline space available for the column as
there is always some page overhead that is unusable for prefix data.
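A sketch of the workaround (the table and column names are hypothetical):

```sql
-- On a 32K page size database, use 32767 rather than 32768; page
-- overhead makes the extra byte unusable for prefix data anyway.
CREATE TABLE t ( doc LONG VARCHAR INLINE 32767 PREFIX 1000 );
```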
================(Build #3721 - Engineering Case #535804)================
Values for SOAP input TIME and DATETIME data types were incorrectly converted
to the server's locale if the value contained a negative time zone offset,
with a nonzero minute field, i.e. GMT-03:30 (Newfoundland). This has been
fixed.
In addition, the processing of DATE values has been modified with this change.
The TZ offset, if provided with an input DATE value, is now ignored, and the
TZ offset is no longer appended to an output DATE value (within the HTTP/SOAP
response).
================(Build #3721 - Engineering Case #535627)================
The database properties CleanablePagesAdded and CleanablePagesCleaned could
have reported that there were pages to clean when in actuality there were
none. This would have happened if a dbspace with cleanable pages was dropped.
This has now been fixed.
================(Build #3721 - Engineering Case #534927)================
Control of an HTTP session time-out duration was not passed to subsequent
HTTP requests belonging to the same session if a TEMPORARY HTTP_SESSION_TIMEOUT
database option had been set (in a previous HTTP request belonging to the
session). The scope of the problem applied to all TEMPORARY database options
set within an HTTP session context. The problem was due to user id being
reset for each HTTP request. This has been corrected so that an HTTP request
within a session context will no longer reset its user id if it is identical
to the user id of the current service.
The problem remained however if an HTTP session was used to call a service
that specified a different user id. A SA web application using HTTP sessions
should only use TEMPORARY and/or USER specific options when all requests
within the HTTP session context access SA services defined with the same
user id. Similarly, accessing an authenticated SERVICE would require that
the HTTP request belonging to a session provide the same user id from request
to request. To address this, a new HTTP OPTION called SessionTimeout has
been added to make HTTP session time-out criteria persistent in all cases.
It can be set from within an HTTP request that has defined, or will define,
a SessionID. The context of the setting is preserved throughout the HTTP
session, until it expires, is deleted or changed (with a subsequent SA_SET_HTTP_OPTION
call).
- New SA_SET_HTTP_OPTION option SessionTimeout
The value of this HTTP OPTION is specified in minutes. It is subject to
the minimum and maximum constraints of the HTTP_SESSION_TIMEOUT database
option. A newly created session is implicitly assigned the current or default
PUBLIC/USER HTTP_SESSION_TIMEOUT.
The following example sets a given HTTP session time-out to 5 minutes:
call SA_SET_HTTP_OPTION('SessionTimeout', '5');
An empty value resets the option to its default value, or as set by the
PUBLIC or USER scope HTTP_SESSION_TIMEOUT database option.
call SA_SET_HTTP_OPTION('SessionTimeout', ''); // resets the time-out to
30 minutes - the default value of the HTTP_SESSION_TIMEOUT database option
SET OPTION PUBLIC.HTTP_SESSION_TIMEOUT=1 // New HTTP sessions calling SA_SET_HTTP_OPTION('SessionTimeout',
'') set session time-out to 1 minute
SET OPTION USERA.HTTP_SESSION_TIMEOUT=15 // New HTTP sessions calling SA_SET_HTTP_OPTION('SessionTimeout',
'') set session time-out to 15 minutes for USERA
NOTE: HTTP session default criteria is derived from the current PUBLIC/USER
HTTP_SESSION_TIMEOUT database option setting. Any subsequent changes to
this option will not implicitly affect existing HTTP sessions. The default
timeout setting for HTTP sessions that always use the same user id remains
unchanged. However, an HTTP request belonging to a session that calls a
service with an alternate user id will force its cache to be cleared and
the option defaults of the current user to be loaded. Therefore, when the
session switches users all TEMPORARY options are lost and the current PUBLIC/USER
options are assigned.
- New CONNECTION_PROPERTY('SessionTimeout')
Returns the time-out value in minutes for a given database connection belonging
to an HTTP session. The value is the current setting for SA_SET_HTTP_OPTION('SessionTimeout',
'X').
A value of 0 is returned if the database connection does not belong to an
HTTP session. As before, the HTTP_SESSION_TIMEOUT database option may be
queried to determine the PUBLIC/USER default values.
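For example, a minimal sketch (the value returned reflects the most recent
SA_SET_HTTP_OPTION('SessionTimeout') setting for the session):

```sql
-- Returns the session time-out in minutes for this connection,
-- or 0 if the connection does not belong to an HTTP session.
SELECT CONNECTION_PROPERTY( 'SessionTimeout' );
```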
- Summary of changes
TEMPORARY and USER scope options are preserved when HTTP requests belonging
to a session execute SA services defined with a specific (the same) user
id.
SessionTimeout HTTP_OPTION has been added to provide an HTTP session context
time-out criteria. Its use is recommended in place of setting a TEMPORARY
HTTP_SESSION_TIMEOUT database option since it is guaranteed to persist for
the life of the session.
================(Build #3719 - Engineering Case #534963)================
The server tracks dependencies of views on other views and tables. If a table
is referenced by other views, attempting to execute an ALTER TABLE statement
on the referenced table could have caused the server to crash under certain
circumstances. This has been fixed; the server now carries out the ALTER
properly.
A workaround is to disable the dependent view before executing the ALTER
statement, followed by a re-enabling of the view.
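As a sketch of that workaround (view and table names are hypothetical), using
the ALTER VIEW DISABLE and ENABLE clauses:

```sql
ALTER VIEW v_employees DISABLE;               -- invalidate the dependent view
ALTER TABLE employees ADD note VARCHAR(100);  -- the ALTER on the referenced table
ALTER VIEW v_employees ENABLE;                -- recompile and re-enable the view
```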
================(Build #3719 - Engineering Case #530287)================
Indexes containing values longer than approx 250 bytes could have become
corrupt when an entry was deleted from the index. This has now been fixed.
================(Build #3718 - Engineering Case #534496)================
An expression that converted an integer value to a NUMERIC or a DECIMAL,
could have leaked memory in the server if an overflow error was generated.
If enough of these expressions were evaluated, server execution could have
been impaired. This has been fixed.
================(Build #3718 - Engineering Case #534358)================
Use of any of the TLS options "certificate_name", "certificate_unit",
or "certificate_company" would have caused connections to fail
with a "TLS handshake failure" error. This has been fixed. As a
workaround, the options "name", "unit", and "company"
can be used.
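A sketch of the workaround in a client connection string (the server name,
certificate identity values, and certificate file below are placeholders; the
exact TLS option list depends on your configuration):

```
dbping -c "ENG=myserver;LINKS=tcpip;ENC=TLS(trusted_certificates=rsaroot.crt;name=MyServer;company=MyCompany)"
```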
================(Build #3717 - Engineering Case #534324)================
Statements that appeared in stored procedures may have used a cached execution
plan (see Plan caching in the documentation). In some cases, a stale value
of a builtin function could have been returned for subsequent executions
of the statement. This has now been corrected. The following builtin functions
were affected by this issue:
connection_extended_property
connection_property
db_extended_property
db_property
estimate
estimate_source
event_condition
event_condition_name
event_parameter
experience_estimate
http_body
http_header
http_variable
index_enabled
index_estimate
next_connection
next_database
next_http_header
next_http_variable
next_soap_header
property
rewrite
soap_header
varexists
watcomsql
For web services based on sessions, connection properties such as SessionLastTime
could also have been affected by this (among other builtins). This incorrect
behaviour was masked in version 10.0.1 for web services using sessions.
================(Build #3717 - Engineering Case #534132)================
If many rows had been deleted from the end of an index, and the server was
under heavy load for some period of time after that, there was a chance that
the server could have crashed. This has now been fixed.
================(Build #3717 - Engineering Case #533724)================
The server would have crashed if the sa_locks() system procedure was executed
when it was running in bulk operation mode (-b server command line option).
This has been fixed.
================(Build #3716 - Engineering Case #533802)================
Execution of a SELECT statement that referenced a procedure call in the FROM
clause could have resulted in a server crash. For this to have occurred,
the connection must have had several cursors open, or have had several prepared
statements. This has now been fixed.
================(Build #3716 - Engineering Case #533793)================
If a server had multiple databases loaded, each with a different character
set, the database name returned by the system function "db_property('Name',
<dbid>)" could have been improperly character set converted.
This could have made the name returned appear garbled. For this to have occurred,
the database ID specified by "<dbid>" must have been different
from the ID of the database of the connection. This has now been fixed.
================(Build #3714 - Engineering Case #533600)================
When using a derived table in a remote query, if one or more columns from
the derived table were referenced in the WHERE clause of the query, and the
query was going to be processed in full passthru, then the engine would have
returned with a "correlation name not found" error. This problem
has now been fixed.
================(Build #3714 - Engineering Case #533055)================
If a LIKE predicate contained specific forms of patterns, and it referred
to a column contained in an index, then it was possible for the server to
crash when opening the statement containing the LIKE predicate. This has
been fixed.
================(Build #3714 - Engineering Case #530710)================
Executing an INSERT or an UPDATE that fails, could, in some cases, have caused
the database server to fail an assertion. A specific assertion that was
likely to have been seen as a result of this failure was: 201501 - "Page
for requested record not a table page or record not present on page."
For this problem to have occurred, the failing INSERT or UPDATE must have
been to a table that had blob columns containing data less than approximately
one database page in length, but longer than the column's inline amount.
This has now been corrected.
================(Build #3712 - Engineering Case #533013)================
When executed from within a login procedure, a BACKUP statement, an external
function call, a web service request, a Java request or a remote procedure
call could have caused the requesting connection to hang indefinitely. In
certain cases, such as when executing a BACKUP statement, this hang could
eventually cause other connections to hang as well. This has been fixed.
================(Build #3712 - Engineering Case #532819)================
If a START DATABASE statement or an attempt to autostart a database on an
already-running server, failed due to an alternate server name conflict,
a second attempt to start the database with the same (conflicting) alternate
server name would have succeeded when it should have failed as well. This
has now been fixed.
================(Build #3712 - Engineering Case #532796)================
If a database was started with an alternate server name on an already-running
server, in rare cases, subsequent TCP connection attempts to the server may
have failed. This has been fixed.
================(Build #3709 - Engineering Case #532850)================
Executing a CREATE EXISTING TABLE statement to create a proxy table when
the remote table has an unsupported index on it, could have caused the statement
to fail. This problem has now been fixed.
================(Build #3708 - Engineering Case #532668)================
The SORTKEY function did not allow the first parameter to be BINARY if the
second parameter (the collation id) was not an integer. Similarly, COMPARE
did not allow either of the first two parameters to be BINARY if the third
parameter (the collation id) was not an integer. For example, SORTKEY( cast(
'a' as binary ), 'dict' ) would have reported a "Cannot convert" conversion
error. This has now been fixed.
================(Build #3708 - Engineering Case #532626)================
In certain rare situations, it was possible for the server to hang when starting
a database. This has been fixed.
================(Build #3708 - Engineering Case #532280)================
The server could have behaved erroneously when new procedures were created
from within event handlers, or after having executed the SETUSER statement.
Although rare, in the worst case a user could have been allowed to be dropped
while still connected. These problems have been corrected so that the server
now behaves correctly.
================(Build #3708 - Engineering Case #532276)================
A large query containing a 'WITH' clause could have crashed the server. The
server was failing to recognize a SYNTACTIC_LIMIT error for such queries.
This has now been corrected.
================(Build #3707 - Engineering Case #532254)================
A server that had registered itself with LDAP could have crashed when
trying to start a database using an alternate server name, if an error occurred
in reading the saldap.ini file. This has been fixed.
================(Build #3706 - Engineering Case #532185)================
SQL Anywhere keeps track of dependencies of views on other views and tables.
For view definition queries that involve more than two UNION, EXCEPT or INTERSECT
branches and/or sub-queries, the server's computation of the dependency information
could have been incorrect, leading to erroneous behaviour. This has been
fixed so that the server now computes the dependency information correctly.
Note, any existing views compiled with an older version of the server will
continue to have potentially incorrect dependency information in the catalog.
Existing views can be made to have the correct dependency information by
being recompiled, either implicitly during a DDL operation on one of the
referenced tables, or explicitly.
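An explicit recompile can be done with the ALTER VIEW statement (the view name
below is hypothetical):

```sql
-- Recompiles the view so the server recomputes its
-- dependency information in the catalog.
ALTER VIEW v_sales RECOMPILE;
```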
================(Build #3706 - Engineering Case #532109)================
A query containing a derived table, or a subquery with an OUTER JOIN whose
ON condition references tables from outside the query block, may have returned
incorrect results. This has been fixed.
In the following example, the ON condition of the LEFT OUTER JOIN in the
EXISTS subquery references the table 't1', which is a table used in the FROM
clause of the main query block. This query may return incorrect results for
some instances of the database.
select * from t1
where exists(
select 1
from t4
left outer join t2 on ( t2.c1 = t4.c1 and t2.c1 = t1.c1 )
where t4.c1=1
)
================(Build #3706 - Engineering Case #532102)================
The changes made for Engineering case 483518 caused the server to crash if
a call to the graphical_plan() function was made within an event handler.
This has now been corrected.
================(Build #3706 - Engineering Case #531334)================
On Linux systems, starting a database that is stored on a non-tmpfs based
ramdisk could have failed. This has been fixed.
Note, a work around is to use a tmpfs based ramdisk, or start the server
with the -u option (use buffered disk I/O).
================(Build #3706 - Engineering Case #530920)================
During diagnostic tracing with at least one tracing level of type optimization_logging_with_plans,
an incorrect row size could have been reported for a table that had been
created immediately before the statement referencing the table was executed.
This has been fixed.
================(Build #3706 - Engineering Case #530776)================
When a database created by a newer version of SQL Anywhere (e.g. version 11)
was started by an older version of SQL Anywhere (e.g. version 10), the server
would have read some pages, other than the definition page, from the database
before verifying that the capabilities of the file and server were compatible.
The server now tests the capability bits of a database file against the capabilities
supported by the server sooner in the database startup process. There are
no known user-visible effects caused by checking the capabilities sooner,
other than when starting an encrypted database created by newer software,
the server will no longer prompt for an encryption key before reporting that
the capabilities are incompatible.
================(Build #3702 - Engineering Case #531295)================
The server could have crashed while optimizing complex queries. This has
been fixed.
================(Build #3698 - Engineering Case #538480)================
In rare circumstances, the server could have crashed while disconnecting
if -zl, -zp, sa_server_option( 'RememberLastStatement', 'YES' ) or sa_server_option(
'RememberLastPlan', 'YES' ) were used. This has been fixed.
================(Build #3698 - Engineering Case #500507)================
If a proxy table referred to a table in a DB2 database and had a BLOB column,
then attempting to insert data into the BLOB column would have caused syntax
errors. Note that this problem did not exist if the column was instead defined
as "LONG VARCHAR FOR BIT DATA", which more closely mapped to the
SA "long binary" datatype. Nevertheless, the problem with inserting
into DB2 BLOB columns has now been fixed.
================(Build #3697 - Engineering Case #530576)================
The http_encode() function was not encoding the 0x1f character. This has
been fixed. This character is now encoded to "%1F".
================(Build #3697 - Engineering Case #530273)================
If there were more than 100 connections actively using Java in the database
support at the same time, then the JVM would have crashed, or the server
could have hung. This problem has now been fixed.
================(Build #3696 - Engineering Case #530339)================
If an application had a connection enlisted in Microsoft's Distributed Transaction
Coordinator (DTC), and it issued a commit on the distributed transaction,
then there was a chance the server would have hung when a request to enlist
in the DTC came in at the same time as the two phase commit. This problem
has now been fixed.
================(Build #3696 - Engineering Case #530318)================
When diagnostic tracing was enabled, with PLANS or PLANS_WITH_STATISTICS
as the tracing type, some plans or cursor information could have failed to
have been saved. Alternatively, some plans that did not fit the timing cut-off
in ABSOLUTE_COST or RELATIVE_COST_DIFFERENCE conditions, could have been
incorrectly saved. These problems have now been fixed.
================(Build #3695 - Engineering Case #530039)================
When a connection is unexpectedly terminated, a message is displayed in the
server console containing the AppInfo string for the client. This message
was incorrectly being truncated at 255 bytes. This has been fixed.
================(Build #3695 - Engineering Case #529852)================
If a client disconnected at a specific time interval, possibly due to liveness
timeout, during a positioned update statement, then the server could have
failed assertion 101704 - "Unexpected state in positioned update error".
This has now been fixed.
================(Build #3695 - Engineering Case #529201)================
Under rare circumstances, the server could have crashed if a DML statement
was executed while diagnostic tracing was being stopped by the DETACH TRACING
statemented. This has been fixed.
================(Build #3695 - Engineering Case #500074)================
A join that included tables containing long strings (roughly one database
page or greater in size) may have taken a disproportionate amount of time
to complete, or to respond to a cancel. This delay would have increased
as the number of rows containing long string data increased. This has been
fixed.
================(Build #3694 - Engineering Case #528838)================
The server could have crashed, or failed assertion 201501 ("Page for
requested record not a table page or record not present on page"), when
inserting rows into a table with a clustered index that previously had rows
deleted from it. This has now been fixed.
================(Build #3693 - Engineering Case #529055)================
Attempts to connect to a database using connection strings containing a database
name longer than 250 bytes, would have failed, even if the database name
matched in the first 250 characters. This has been fixed.
================(Build #3692 - Engineering Case #528627)================
Selectivity estimates could have been incorrectly updated if a query with
predicate of the form "T.x <op> expr( T.y ) or expr( T.y ) <op>
T.x" was executed. These
incorrect predicate selectivity estimates could have lead to lower quality
query access plans. When "expr" was an expression referencing one
(or more) columns of table with correlation name T, selectivity estimates
could have been updated with the assumption that "expr(T.y)" was
constant for the duration of the query. This has been fixed.
For example:
SELECT t.x, t.y, t.z
FROM tx AS t
WHERE t.x <= t.y + 1
================(Build #3691 - Engineering Case #523745)================
The server automatically maintains columns statistics, in the form of histograms,
to capture the data distribution. Under some specific circumstances, the
server could have applied incorrect modifications to the automatically maintained
statistics, resulting in potentially poor query access plans. Symptoms of
this problem would often have been the presence of duplicate boundary values
in the result set of system procedure sa_get_histogram(). This has been corrected
by an update to the server's histogram maintenance algorithms.
================(Build #3691 - Engineering Case #500900)================
A validation check was missing when a SOAP request was made through a DISH
service endpoint. This has been fixed.
================(Build #3691 - Engineering Case #500837)================
In specific circumstances, the server could have crashed while processing
a hash join. This has been fixed.
================(Build #3691 - Engineering Case #495700)================
As of Engineering case 408481, exists() subqueries were not flattened during
rewrite optimizations if the subquery contained more than two tables in the
FROM clause, and it was not known if the subquery returned at most one row.
Now, for a subquery to not be flattened, it must also contain other nested
subqueries.
================(Build #3690 - Engineering Case #528359)================
When a hash-based execution strategy was used for INTERSECT, EXCEPT, or a
semi-join, it was possible for the wrong answer to be returned in specific
situations when the operator used a low-memory execution strategy. This has
been fixed.
================(Build #3690 - Engineering Case #528358)================
If a query plan had a Hash Group By that used a low memory strategy, and
there was a SUM() aggregate over a NUMERIC or DECIMAL value, then the wrong
answer could have been returned. This has been fixed.
================(Build #3690 - Engineering Case #500656)================
The server may have returned an invalid numeric value when a value of type
Double was cast to a numeric type that was too small. This has been fixed.
================(Build #3689 - Engineering Case #500700)================
Applications using ODBC.Net could not have been executed with the Runtime
Server. The error displayed when attempting to execute a query would have
been:
Triggers and procedures not supported in runtime server
This has now been corrected.
================(Build #3689 - Engineering Case #500653)================
If a table participated in a publication, it was possible for the server
to have failed assertion 100905 ("Articles on the table use do not match
those on the table definition") while processing an UPDATE statement
that affected the table. This has been fixed.
================(Build #3689 - Engineering Case #499956)================
If queries used an index, where the index keys were long, then in some situations,
the server could have crashed. This has been fixed.
================(Build #3688 - Engineering Case #500522)================
When running the server on multicore Intel x64 hardware, with 64 bit operating
systems, the server could have missed opportunities for optimization and
intra-query parallelism. This has been fixed. Note, using 32 bit software
on these same platforms did not exhibit these problems.
================(Build #3688 - Engineering Case #500517)================
If the AppInfo connection parameter contained non-ASCII characters, and the
database charset and the OS charset were different, the non-ASCII characters
would have appeared mangled when printed out to the console as part of an
abnormal disconnection message, or when the connection was being established
if the -z switch was used. This has been fixed.
================(Build #3688 - Engineering Case #500501)================
If one of the encryption components (i.e. dbecc10.dll, dbrsa10.dll, dbfips10.dll)
became corrupted, it was possible for the server to return an error the first
time it was used, and then crash the second time. This has been fixed.
================(Build #3688 - Engineering Case #500302)================
A SQL Anywhere SOAP SERVICE returned a SOAP fault with the HTTP response
header "Content-Type: text/html" when the given service had encountered
a protocol or parse error. This has been fixed; a "Content-Type: text/xml"
is now returned.
================(Build #3688 - Engineering Case #500092)================
Inserting a string of length L, where L was slightly less than 8*db_property('pagesize')
(i.e. within about 13 bytes), into a compressed column could have caused
a server crash, or assertion 202000 ("Invalid string continuation at
row id ..."). This has been fixed.
================(Build #3688 - Engineering Case #499484)================
The comparison of two strings may not have worked correctly when using
the UCA collation. For this problem to have occurred, the strings must have
been longer than 1024 bytes, have been linguistically equal in the characters
of the first 1024 bytes, but have been binary distinct.
For example, on a UCA database:
select if repeat('a', 1024) || 'a' = repeat('a', 1024) ||'b'
then 'equal'
else 'not equal'
endif
would return 'equal'. This has been fixed.
================(Build #3687 - Engineering Case #500128)================
The server could have crashed with a division by 0 error when specific repeat()
expressions appeared in a query. For this problem to have occurred, the
expression in the repeat() function must have specified an NCHAR string literal
of zero-length (i.e., N''). The query must also compare the strings in some
fashion.
For example:
select repeat(N'',row_num) x
from rowgenerator
group by x
This has now been fixed.
================(Build #3687 - Engineering Case #500017)================
The ATTACH TRACING statement could have failed if the database character
set was different from the OS character set, and the server name and/or database
name of the tracing database contain multibyte characters. This has been
fixed for cases when conversion between the OS character set and database
character set is not lossy.
Note that using multibyte characters in server and/or database names is
not recommended for profiling, especially if the tracing database is started
on a different physical server than the database being profiled.
================(Build #3686 - Engineering Case #499897)================
When a remote procedure call was made, if it contained a bigint parameter
then an incorrect value for the bigint parameter would have been sent to
the remote server. This problem has now been fixed.
================(Build #3686 - Engineering Case #498876)================
Under very rare conditions, the server could have hung during a row scan over
a table that had character or binary data. For this problem to have occurred,
the data must have been longer than the column's INLINE amount, which defaults
to 256 bytes, for both BINARY and CHAR types. This has been fixed.
================(Build #3685 - Engineering Case #496450)================
Attempting to execute an ALTER TABLE statement to add a computed column that
involved a Java call would have caused the server to either fail to start
the Java VM, or to hang. This problem has now been fixed.
================(Build #3685 - Engineering Case #485293)================
If diagnostic tracing was started using the Tracing Wizard in Sybase Central,
and the connection string specified contained the tracing database's file
name, the tracing database would not have been started, and the ATTACH TRACING
statement would have failed. This has been fixed so that if the database
file name is specified, the database will be started on the current server.
The location of the database should be given relative to the database server
location.
================(Build #3684 - Engineering Case #499252)================
HTTP response headers were not set as expected if the SERVICE made a nested
call to an inner procedure from where the sa_set_http_header() system procedure
was called. Headers cannot be set by the inner procedure because the server
has already sent the headers prior to the call. This has been fixed so that
calling sa_set_http_header, when the HTTP headers have already been sent,
will now result in a SQL error: Invalid setting for HTTP header.
================(Build #3684 - Engineering Case #499250)================
When creating or altering a SERVICE that required a statement, a check was
missing to ensure that the service was properly configured with either a
SELECT or a CALL statement. A misconfigured service would have always returned
a '400 Bad Request' HTTP status for all requests. This has been fixed.
================(Build #3684 - Engineering Case #499233)================
The server may have got into an infinite loop trying to convert an invalid
date, time or timestamp value to an integer. This has been fixed.
================(Build #3683 - Engineering Case #498860)================
If a database was backed up, and then an attempt to create a proxy table
was made shortly after the backup finished, then it was likely that the
server would have hung. This problem has now been fixed.
================(Build #3683 - Engineering Case #498198)================
In certain cases, altering a string column could have produced orphaned blobs.
These orphans would have shown up as errors when a validation (dbvalid or
VALIDATE statement) was run on either the database or the table. For this
problem to have occurred, the table must have contained at least two string
columns, with (at least) one column containing "long" strings,
i.e., strings larger than approximately one database page. If a column other
than the one containing long strings was altered to a size smaller than its
inline amount, and then altered to a size larger than its inline amount,
the long strings would have become orphaned. Note that the default inline
amount for a CHAR column is 256. This has now been fixed.
For example:
CREATE TABLE test(col1 long varchar, col2 char(1000))
// ... insert long data into col1
ALTER TABLE TEST MODIFY col2 CHAR(10)
ALTER TABLE TEST MODIFY col2 CHAR(1000)
VALIDATE TABLE TEST
A workaround to the validation failure is to rebuild the database using
dbunload or the Unload wizard in Sybase Central.
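A minimal sketch of the rebuild workaround from the command line (file names
and credentials are placeholders):

```
dbunload -c "DBF=test.db;UID=DBA;PWD=sql" -an test_rebuilt.db
```

The -an option unloads the database and reloads it into the newly created
database file in a single step.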
================(Build #3682 - Engineering Case #498727)================
Starting the server with the command line option -x none (or -x shmem), and
no -xs option, causes the server to only listen for connection requests
over shared memory. However, if the -z switch was also used in this case,
messages about TCP initialization were still displayed. This has been corrected
so that these messages will no longer be displayed in this case.
================(Build #3682 - Engineering Case #498583)================
The server would have rejected a SOAP request containing XML comments with
a '400 Bad Request' status. This has been fixed. Comments are now ignored
by the server.
================(Build #3682 - Engineering Case #498568)================
If a user U1 with DBA authority granted permissions on a table or view to
another user U2, and then DBA authority was revoked from U1, U2's permissions
should have been affected immediately. Instead, this change did not appear
until the database was restarted. This has been fixed.
================(Build #3682 - Engineering Case #498529)================
A cursor using a temporary table and a procedure call could have resulted
in a server crash. This has been fixed.
================(Build #3682 - Engineering Case #498393)================
When sending diagnostic tracing data to a remote database, if the tracing
database was stopped, or the connection to it was interrupted, before diagnostic
tracing was stopped, the database server could have crashed. This has been
fixed.
================(Build #3682 - Engineering Case #498204)================
The server could have crashed shortly after starting a database with dbspaces
that had no deletes or updates performed on them. This has been fixed.
================(Build #3682 - Engineering Case #496719)================
When rebuilding, by unloading and reloading, a version 9.0 database that
had Remote Data Access servers defined, there was a possibility that the
reload would have failed with a "capability 'aes_encrypt' not found"
error. This problem has now been fixed.
The workaround is to edit the reload script and change all occurrences of
'aes_encrypt' to 'encrypt'.
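The edit can be scripted, for example (reload.sql stands for the reload
script produced by dbunload; the file names are placeholders):

```
sed -e "s/'aes_encrypt'/'encrypt'/g" reload.sql > reload_fixed.sql
```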
================(Build #3678 - Engineering Case #496241)================
If application profiling data was stored in an external tracing database,
trigger names may not have been attached to the statements executed within
the triggers. This has been fixed.
Note that the statement data was still saved in the tracing database - it
was just not linked to the trigger, and was displayed in the Profiling mode
in the Details tab with "Procedure or Trigger Name" set to NULL.
================(Build #3677 - Engineering Case #497880)================
The server could have failed assertion 202001 while executing a query, if
a temporary table was scanned that contained a string longer than the database
page size. This has been fixed.
================(Build #3677 - Engineering Case #497121)================
In some cases, actual node statistics were not reported for 'hash filter'
and 'hash filter parallel' nodes in the graphical plan with detailed statistics.
Also, in some cases actual statistics were not reported for recursive unions.
This has been fixed.
================(Build #3676 - Engineering Case #497504)================
If an application that was using Remote Data Access support executed a remote
query that was very complex, then there was a possibility that the server
would have crashed. This problem has been fixed.
================(Build #3676 - Engineering Case #497502)================
If an application that was using Java in the database support spawned additional
threads that were still running when the database was shut down, then the
JVM would have continued running until these additional threads shut down.
This problem has now been fixed.
Note, if the application needs these threads to be notified of the shutdown,
then the application must register a shutdown hook with the Java VM.
================(Build #3676 - Engineering Case #497467)================
The unparsing of an alias was incorrect in some cases. The alias was represented
as a '*' in an unparsed statement. This may have been observed in plans generated
when using the -zx server command line option to log expensive queries. This
has been fixed.
================(Build #3675 - Engineering Case #497264)================
In rare cases, an HTTP request with a SessionID may have caused the server
to crash. This has been fixed.
================(Build #3675 - Engineering Case #497105)================
If request log filtering by database was enabled using:
call sa_server_option('RequestFilterDB', <db-id> )
then query plans from statements executed on other databases on the same server
would still have appeared in the request log. This has been fixed.
================(Build #3675 - Engineering Case #496526)================
When inserting rows using an opened cursor, rather than using an INSERT statement,
computed columns would not have been properly evaluated. This has been fixed.
================(Build #3672 - Engineering Case #496071)================
The server uses column statistics in order to estimate the number of rows
that satisfy a given predicate. The column statistics are maintained as histograms
that attempt to capture the data distribution of values in the database for
a given column. The server could have incorrectly estimated the selectivity
of BETWEEN predicates under certain circumstances. Note that a BETWEEN predicate
might be inferred by the server if the query contains appropriate conjunctive
predicates. As an example, "c1 >= 5 AND c1 <= 10" is semantically
equivalent to "c1 BETWEEN 5 AND 10". This estimation problem has
been resolved.
================(Build #3670 - Engineering Case #496429)================
Graphical plans for Group By queries that were executed in parallel did
not report group by expressions or aggregates in the tooltip and details
pane for a GroupBy node below an Exchange node.
This has been fixed.
================(Build #3670 - Engineering Case #496114)================
Graphical plans with detailed statistics that contained an index scan would
not have shown values for the statistic CacheReadIndInt, and the values for
CacheReadIndLeaf would have included internal and index leaf page reads.
This has now been fixed.
================(Build #3670 - Engineering Case #495960)================
In very rare and timing-dependent situations, a cancelled backup could have
caused the request, and ultimately the server, to hang. Typically, for this
problem to have occurred, the cancel would have had to occur quite quickly
after the backup began. This has been fixed.
================(Build #3670 - Engineering Case #495929)================
Attempting to execute a query that referenced a proxy table containing
nchar or nvarchar columns may have failed assertion 106808. The server was
incorrectly setting the semantics to byte length, instead of character length,
when describing nchar and nvarchar columns in proxy tables. This problem
has now been fixed.
================(Build #3669 - Engineering Case #496113)================
Under rare circumstances, during diagnostic tracing with the 'plans_with_statistics'
tracing level set, query plan information for a DML statement could have
been missing. In such cases, viewing the plan for the statement in the Profiling
Mode would not be possible. This has been fixed.
================(Build #3669 - Engineering Case #496094)================
Using the REWRITE() function on some forms of queries could have resulted
in the server going into an infinite loop. This has been fixed.
================(Build #3669 - Engineering Case #496068)================
If a DML statement modified a table, and also referred to the table indirectly
through a non-inlined procedure call, then anomalies could have occurred.
This has been fixed by forcing a work table for any DML statement that references
a procedure call.
================(Build #3669 - Engineering Case #496061)================
If a BEFORE trigger changed an update back to the original value of the row,
then the update would still have been logged, even though it was a no-op.
This is now only done if a resolve trigger is fired, which matches the behaviour
of previous versions.
================(Build #3668 - Engineering Case #487164)================
The Index Consultant may have caused the server to crash when a complex query
was analyzed. For example, a query with a subselect in the select list. This
has been fixed.
================(Build #3667 - Engineering Case #495962)================
If an application issued a statement like "DELETE FROM t WHERE c = @v",
and the table t was a proxy table and @v was a variable of type nchar, nvarchar
or long nvarchar, then the query would have failed with a "not enough
host variables" error. This problem has now been fixed.
================(Build #3667 - Engineering Case #495956)================
Executing the statement ALTER DATABASE <dbfile> MODIFY LOG ON, where
no transaction log name was specified, would have disabled transaction logging
for that database, equivalent to specifying "... LOG OFF". This
has been fixed. Including a transaction log filename in the statement would
have behaved correctly.
================(Build #3667 - Engineering Case #495872)================
If a query contained a "GROUP BY GROUPING SETS" or CUBE clause,
it was possible for the server to fail the query with an assertion failure
such as: 102501 "Work table: NULL value inserted into not-NULL column".
The problem would only have occurred for specific query access plans if there
were no rows input to the GROUP BY. This has now been fixed.
================(Build #3665 - Engineering Case #495396)================
If a client connection went away (e.g. the client application crashed), it
was possible for the server to have crashed. This was very rare and timing-dependent.
It has now been fixed.
================(Build #3664 - Engineering Case #495574)================
The server could have crashed, or failed assertions, when scanning many values
from an index containing many and/or wide columns. This has now been fixed.
================(Build #3664 - Engineering Case #495506)================
SQL statements that are executed both before and after a SETUSER statement
may have referred to the wrong user's objects.
For example, if connected as user u1 and the following executed:
select * from t;
setuser u2;
select * from t;
then the second SELECT could have incorrectly returned the results for u1.t
instead of u2.t. This has been fixed. Setting the database option max_client_statements_cached
to 0 will work around this problem.
================(Build #3664 - Engineering Case #489542)================
In very rare situations, the server could have hung while trying to drop
a Remote Data Access connection. This problem has now been fixed.
================(Build #3664 - Engineering Case #452798)================
In very rare cases, a crash during recovery, or killing the server during
recovery, may have resulted in assertion 201502 - "Inconsistent page
modification counter value" on subsequent attempts to recover. This
has now been fixed.
================(Build #3663 - Engineering Case #492783)================
If an application using the iAnywhere JDBC driver called ResultSet.getDouble()
to fetch a numeric value, then there was a chance the JDBC driver would have
thrown an "invalid numeric string" exception. This problem would
only have happened if the application was using the Java print service and
the default locale was a European one. Note that calling ResultSet.getBigDecimal()
did not have the same problem, hence calling ResultSet.getBigDecimal().doubleValue()
is a workaround for this problem. The original problem has now been fixed.
================(Build #3662 - Engineering Case #495236)================
There was a chance that the server would have crashed when making the connection
to a remote server using one of the JDBC based Remote Data Access classes.
This problem has been fixed.
================(Build #3662 - Engineering Case #495231)================
Executing a COMMENT ON INTEGRATED LOGIN statement could have caused the server
to crash, or to fail an assertion, if executed concurrently with other commands.
This has been corrected.
================(Build #3661 - Engineering Case #494983)================
If the last page of the transaction log was only partially written, perhaps
due to a power failure, it was possible that the database would not have
been able to recover on startup. It is most probable that this would have
occurred on Windows CE. The likely error would have been a failure to validate
the checksum on the page. This has been fixed.
================(Build #3660 - Engineering Case #494708)================
It was possible for the server to fail assertions 201866 - "Checkpoint
Log: Page 0x%x is invalid" or 201864 - "Checkpoint log: Invalid
page number on page 0x%x", for a database containing a corrupted page
in the checkpoint log. This could have occurred in cases where it was safe
for the server to ignore the corruption and recover the database. This has
been fixed. The assertions never occurred in databases that didn't contain
corruption.
================(Build #3659 - Engineering Case #494449)================
Under rare circumstances, the server could have crashed while executing a
trigger defined for multiple events. This has been fixed.
================(Build #3659 - Engineering Case #494431)================
An error could have incorrectly been given when converting a string such
as '+123' to one of the following types: INT, UNSIGNED INT, BIGINT, UNSIGNED
BIGINT. A redundant '+' is permitted at the beginning of the string when
converting to a number. Prior to the changes for Engineering case 392468
(10.0.1 build 3476), the conversion incorrectly gave an error for BIGINT
and UNSIGNED BIGINT, but the correct behaviour was given for INT and UNSIGNED
INT. In 10.0.1 build 3476 and later, the error was generated incorrectly
for all of the above listed types. This problem has been fixed.
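With the fix, all of the listed types accept a redundant leading sign. A minimal illustration:

```sql
-- After the fix, each of these conversions succeeds and returns 123;
-- in the affected builds some or all of them incorrectly raised an error.
SELECT CAST('+123' AS INT),
       CAST('+123' AS UNSIGNED INT),
       CAST('+123' AS BIGINT),
       CAST('+123' AS UNSIGNED BIGINT);
```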
================(Build #3659 - Engineering Case #494319)================
If the Unload utility was used to perform an internal unload (-ii or -ix
command line option) from a Windows client while connected to a Unix server,
unloaded data files would have been created with a backslash character in
the file name instead of being placed in a sub-directory of the server's
current working directory. Furthermore, the generated reload.sql script referenced
these data files using forward slashes, making it unusable without modification.
A work around is to append a forward slash to the end of the directory name
passed to the dbunload utility.
================(Build #3658 - Engineering Case #494448)================
Catalog information about a materialized view could have been inaccurate
following execution of a REFRESH MATERIALIZED VIEW statement that had failed.
This has been fixed.
================(Build #3657 - Engineering Case #494310)================
If a materialized view was dropped as the result of dropping a user, and
this was done concurrently with other database requests, the server could
have crashed, or failed assertions. Database corruption was also possible. This
has now been fixed.
================(Build #3657 - Engineering Case #494020)================
In very rare cases, the server may have crashed on recovery, or failed to
recover with other errors, most likely related to database page access.
This problem was only possible if the server crashed in the midst of a checkpoint.
This has now been fixed.
================(Build #3657 - Engineering Case #491787)================
If a server was running in a high availability mirroring system and a client
connection was cancelled or dropped, the server could have crashed. This
has been fixed.
================(Build #3656 - Engineering Case #493744)================
In the following situation:
1) A procedure call in the FROM clause consists of a single SELECT statement
and nothing else
2) The FROM clause inside the procedure from 1) also consists of a single
SELECT and nothing else
then the procedure from 2) may have been looked up in the context of the current
connection rather than that of the owner of the procedure in 1). This has been fixed.
Note that this is not a security hole since the incorrect lookup is done
in the context of the current connection.
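A sketch of the two-level scenario described above (procedure, table, and user names are hypothetical):

```sql
-- Both procedures consist of a single SELECT and nothing else.
CREATE PROCEDURE u1.inner_proc()
BEGIN
    SELECT * FROM u1.t;
END;

CREATE PROCEDURE u1.outer_proc()
BEGIN
    SELECT * FROM inner_proc();  -- should resolve as owner u1
END;

-- Before the fix, a different user calling u1.outer_proc() could have had
-- inner_proc resolved in the caller's context instead of u1's.
SELECT * FROM u1.outer_proc();
```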
================(Build #3655 - Engineering Case #493757)================
The server could have crashed when inserting a row into a wide clustered
index (one on many columns, or on long strings). This has now been fixed.
================(Build #3655 - Engineering Case #493730)================
In rare, timing dependent cases, the server could have hung on multi-processor
systems with one processor at 100% usage when iterating through connection
handles (for example by using sa_conn_list or sa_conn_info). This was extremely
unlikely on single processor systems, or on a server that had a low rate
of connects and disconnects. This has been fixed.
================(Build #3654 - Engineering Case #493729)================
A problem introduced by changes for Engineering case 489871 made it possible
for database recovery to fail with databases using the -m switch to truncate
the transaction log at each checkpoint. This was not an issue if -m was
not being used. This has been fixed.
================(Build #3654 - Engineering Case #493715)================
If a database requires recovery, executing "START DATABASE {database
name} FOR READ ONLY" would fail with the error "Unable to start
specified database: unable to start database {database name}". This
has been fixed; the error message will now read "Unable to start specified
database: not expecting any operations in transaction log".
================(Build #3652 - Engineering Case #493217)================
In certain rare situations calls to the MOD() function with NUMERIC or DECIMAL
arguments, could have caused the server to crash, or to report an unexpected
error. This has been fixed.
================(Build #3652 - Engineering Case #491010)================
The database option database_authentication, defined in saopts.sql or authenticate.sql,
could have silently failed to be set during a create or upgrade of
a database. Statements in the scripts used during a create or upgrade of a
database were ignored if either the 'go' terminator was not lowercase, or
an end of file was reached with no 'go' following the statement. This has
been fixed.
================(Build #3651 - Engineering Case #493096)================
When a server has more than 200 concurrent connections, the liveness timeout
should be automatically increased by the server to avoid possible dropped
connections. This was not being done until the server had at least 1000 concurrent
connections. This has been corrected.
================(Build #3651 - Engineering Case #493071)================
The server could have crashed if a connection was attempted at the same time
as a connection was in the process of disconnecting. The likelihood of this
occurring would have been extremely rare due to the very small timing window.
This has now been fixed.
================(Build #3651 - Engineering Case #493049)================
If the very first request to a remote server was executed by a user event
at the time the server was shutting down, or during a server startup that
fails, then the server may have crashed. This has been fixed.
================(Build #3649 - Engineering Case #492540)================
If the last transaction log page received by a server acting as database
mirror was completely filled, and the primary server was then shut down,
the mirror could have failed to start. This has been fixed.
================(Build #3649 - Engineering Case #492348)================
If an application had more than one CallableStatement open on the same connection,
then there was a chance that closing the CallableStatements would have caused
a hang in the application. It should be noted that the problem does not exist
with Statement and PreparedStatement objects. This problem has now been fixed.
================(Build #3649 - Engineering Case #492347)================
In certain conditions, executing statements with the ARGN() function could
have caused the server to crash. This has been fixed.
================(Build #3649 - Engineering Case #492332)================
Materialized view maintenance could result in assertions and server crashes
if there were other active connections (including internal connections).
This has now been fixed.
================(Build #3649 - Engineering Case #492227)================
The server may have crashed if an ALTER TABLE statement attempted to rename
a primary key or unique key column, and there already existed a foreign key
with referential action for this column. This has been fixed.
================(Build #3649 - Engineering Case #492188)================
In rare circumstances, the server could have crashed when attempting to execute
an external Java procedure. This has now been fixed.
================(Build #3648 - Engineering Case #492353)================
In rare cases, the server could have crashed on shutdown if the cache priming
page collection was enabled. Page collection is enabled by default, or if
-cc or -cc+ were provided on the command line. This has now been fixed. The
workaround is to use -cc- to disable cache priming page collection.
================(Build #3648 - Engineering Case #492346)================
When using Java in the database, a method that called System.out.println
with a very long string would very likely have caused the client application
to hang. This problem has now been fixed.
================(Build #3648 - Engineering Case #492302)================
A query that involved more than one "Remote Procedure Call" in
the FROM clause, could have caused the server to crash.
An example of such a query is:
SELECT col1
from remote_procedure_1()
where col2 = (select c1 from remote_procedure_2())
This problem has now been fixed.
================(Build #3647 - Engineering Case #489871)================
There were several problems possible when the -m server command line option
was used to truncate database transaction log files at checkpoint. Some
of these problems were, but were not limited to, assertions indicating that
the log could not be deleted or restarted while a virus scanner or disk defragmenter
was accessing the log file; and occasionally having zero byte transaction
log file remaining after a system failure. These problems should no longer
occur as the transaction log file is no longer deleted and recreated at checkpoint
time when the -m option is being used. Instead the file gets truncated to
one page in size and then continues to be used. A side effect of this change
is that there will be a one page log file remaining after a successful shutdown
of a database, instead of no log file.
================(Build #3645 - Engineering Case #491910)================
In rare cases, concurrent execution of DML and DDL statements could have
crashed the server. This has now been fixed.
================(Build #3644 - Engineering Case #491180)================
On Windows CE devices, in rare cases it was possible for a database stored
on flash storage memory to fail to recover after the device shut down abnormally.
Flash storage memory includes flash memory cards and the standard storage
memory on Windows Mobile 5 and 6 devices. This has been fixed.
================(Build #3643 - Engineering Case #491380)================
If a statement, other than SELECT, INSERT, UPDATE, or DELETE, used a subselect
expression that returned a NUMERIC or DECIMAL data type, then the subsequent
operations using the subselect value could have inappropriately truncated
the numeric result. This has been fixed.
For example, the following sequence could have incorrectly returned 104.0
instead of 104.6.
create variable @vnum numeric(20,4)
set @vnum = ( select max(103.5) ) + 1
================(Build #3642 - Engineering Case #491399)================
When the 10.0 version of dbunload was used to unload a pre-10.0 database,
the "unload support engine" (dbunlspt.exe) was spawned with a cache
size equal to 40% of physical memory. For most databases, this is unnecessarily
large. This has been changed so that
dbunlspt.exe will now start with the same default cache size as the server,
and grow no larger than 40% of physical memory.
================(Build #3642 - Engineering Case #491388)================
Updating or deleting from a large table could have caused index corruption.
For this problem to have occurred, a large number of rows (consecutive in
index order) needed to be updated or deleted. This has been fixed.
================(Build #3641 - Engineering Case #491267)================
The server could have crashed when there were many short transactions on
a busy server. This was more likely to have occurred when running on Unix
system and multiprocessor machines. It was not likely to have occurred when
running on a single processor Windows machine. A race condition has been corrected.
================(Build #3641 - Engineering Case #491108)================
The ALTER and DROP TABLE statements can cause checkpoints to happen under
certain circumstances. If these statements were executed on tables where
the table data had not changed since the last checkpoint, the server did
a checkpoint anyway. These checkpoints caused the server to do unnecessary
serialized work, and could have caused inefficiencies. The problem was most
likely to be observed when large amounts of schema changes were being carried
out, e.g., during a database schema reload. This has been changed so that
the server will no longer cause an unnecessary checkpoint to occur.
================(Build #3641 - Engineering Case #491015)================
A server with AWE enabled (i.e. -cw) could have crashed when running a database
containing encrypted tables. This has been corrected.
================(Build #3641 - Engineering Case #463311)================
The server keeps track of the dependencies of a view on other views and tables.
When the schema of a table or view is modified by a DDL statement, the server
automatically recompiles any existing views that reference the table or view
being modified. If a dependent view no longer compiles as a consequence of
the schema modification of a referenced object, the dependent view is marked
as invalid and is no longer available for queries. Once a view becomes invalid,
the server prohibits its definition from being modified by means of the
ALTER VIEW statement, requiring the view to be dropped and recreated with
a definition that can be successfully compiled. An example of this scenario
is provided below:
create table t1 (c1 integer, c2 integer);
create view v1 as select c1, c2 from t1;
alter table t1 drop c1;
alter view v1 as select c2 from t1;
The server now allows the ALTER VIEW statement on an invalid view, so that
its definition can be corrected, or the view disabled, without having to
drop the view first.
================(Build #3640 - Engineering Case #491121)================
When run on Windows systems, the server's "about" box would not
have opened when selecting "About SQL Anywhere..." after right
clicking on the system tray icon. This has now been fixed.
================(Build #3640 - Engineering Case #482093)================
If, prior to a database going down dirty, a materialized view was refreshed
by a connection with the option isolation_level='snapshot' set, or with SNAPSHOT
isolation specified for a REFRESH statement, and no checkpoint occurred
between the REFRESH statement execution and the database going down, the
database would have failed to recover. This has now been corrected.
================(Build #3639 - Engineering Case #490930)================
Under very rare circumstances, and likely with heavy concurrency, the server
could have crashed. A race condition in the row locking code has been corrected.
================(Build #3637 - Engineering Case #490594)================
In a low memory situation, the Hash Group By algorithm could have failed
to compute the value of composite aggregates (e.g. AVG) for some groups. The
value of the composite aggregate was incorrectly set to NULL. This has been
fixed.
================(Build #3636 - Engineering Case #490504)================
If a Hash Group By algorithm did not have enough memory to operate, and it
was converted to a low-memory execution strategy, it was possible for grouped
rows to be returned even though they did not match the HAVING clause specified
in the statement. This has been fixed.
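An illustrative shape of an affected statement (table and column names are hypothetical): a grouped query with a HAVING clause executed via a Hash Group By under memory pressure.

```sql
-- Before the fix, when the Hash Group By fell back to its low-memory
-- strategy, groups with COUNT(*) <= 10 could also have been returned.
SELECT dept_id, COUNT(*) AS cnt
FROM employee
GROUP BY dept_id
HAVING COUNT(*) > 10;
```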
================(Build #3635 - Engineering Case #500007)================
Under rare circumstances, a server running diagnostic tracing could have
crashed if the database that was being profiled contained triggers. This
has been fixed.
================(Build #3635 - Engineering Case #500005)================
During diagnostic tracing, nonvolatile statistics could have been recorded
incorrectly. This has been fixed.
================(Build #3635 - Engineering Case #485488)================
The server could have crashed when requested to create a tracing database
when using the Sybase Central Database Tracing wizard. This would have happened
when the name of the DBA user for the tracing database was the same as the
name of a DBA user in the existing database. The crash has been fixed.
The workaround, and required behaviour with the fix, is to specify a DBA
user name for the tracing database that does not currently exist in the target
database. The wizard has been modified to alert the user to this.
================(Build #3634 - Engineering Case #490192)================
The embedded SQL OPEN statement did not allow any of the snapshot isolation
levels to be specified in the ISOLATION LEVEL clause. This has been corrected.
The workaround is to use the isolation_level option.
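For example (a sketch; the cursor name is hypothetical), the option-based workaround looks like:

```sql
-- Instead of OPEN ... ISOLATION LEVEL snapshot (rejected before the fix),
-- set the option on the connection before opening the cursor:
SET TEMPORARY OPTION isolation_level = 'snapshot';
-- EXEC SQL OPEN cur;  -- cursor now opens at snapshot isolation
```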
================(Build #3634 - Engineering Case #490180)================
If a query used certain types of expressions that used strings, and a parallel
execution strategy was selected by the query optimizer, then the server could
have crashed under certain conditions. This has been fixed.
The problematic expressions include the following:
COMPRESS CONNECTION_EXTENDED_PROPERTY CSCONVERT DATEFORMAT DB_EXTENDED_PROPERTY
DB_ID DECOMPRESS DECRYPT ENCRYPT EVENT_CONDITION EVENT_PARAMETER EXTENDED_PROPERTY
GET_IDENTITY HASH HEXTOINT HTTP_HEADER HTTP_VARIABLE ISDATE ISNUMERIC LIKE
LOCATE NEXT_HTTP_HEADER NEXT_HTTP_VARIABLE NEXT_SOAP_HEADER PROPERTY_NAME
PROPERTY_NUMBER REPLACE REVERSE SOAP_HEADER SORTKEY TO_CHAR TO_NCHAR UNICODE
USER_ID VAREXISTS WRITE_CLIENT_FILE
================(Build #3634 - Engineering Case #490092)================
In rare cases, attempting to create a procedure or event containing the BACKUP
DATABASE statement would have caused a server crash. The crash was due to
an unparsing error, which has now been corrected.
================(Build #3633 - Engineering Case #489917)================
Certain specific forms of statements could have caused the server to crash,
or to report assertion failures 101504, 101514, or 101515. This has been
fixed.
================(Build #3633 - Engineering Case #489889)================
When executing an UPDATE on a remote table with a cursor range, the cursor
range would have been ignored, and all rows would have instead been updated.
For example, executing the following:
UPDATE TOP 2 proxy_t
SET proxy_t.data = 'Hello'
ORDER BY proxy_t.pkey ASC;
would have updated all rows in the table. This problem has now been fixed.
================(Build #3633 - Engineering Case #489444)================
If an application that used Java in the database attempted to call a static
Java method in a class that had constructors, but no constructor with 0 arguments,
then the call would have failed with an InstantiationException. This problem
has now been fixed.
================(Build #3632 - Engineering Case #489600)================
In rare circumstances, after a backed-up copy of a database was started,
or after a database had undergone recovery, the Validation utility (dbvalid)
could have caused the server to fail assertion 101412 - "Page number on
page does not match page requested". Even though the server failed the
assertion, the database file was not corrupt, and the database should have
continued to operate normally. This has been fixed.
================(Build #3632 - Engineering Case #489598)================
'Assertion 100904: Failed to redo a database operation' is generated when
the server fails to recover the database based on information stored in the
transaction log. This assertion never included the actual reason for the
recovery failure. The actual error message is now included in the assertion
message. In many cases the cause of the recovery failure was failure to find
a data file that was used in a LOAD TABLE statement. In cases where the data
file had been deleted recovery could not continue. The fact that the file
is missing is now incorporated into the assertion message.
================(Build #3631 - Engineering Case #489443)================
When using Java in the database and attempting to have the server execute
a non-static Java method, a confusing NullPointerException would have
been returned. A proper IllegalAccessException, indicating that the
method being executed is not static, will now be returned.
================(Build #3631 - Engineering Case #489337)================
A connection attempting to execute an UNLOAD TABLE statement on more than
one table concurrently could have led to a server deadlock. This could also
have happened when executing the Unload Database utility (dbunload). This
has been fixed.
================(Build #3631 - Engineering Case #489179)================
Load table could have failed when loading data into a table that contained
a self-referencing foreign key. As of Engineering Case 395054 the wait_for_commit
option was set to 'off' for LOAD TABLE so that errors could be detected immediately.
Now wait_for_commit for LOAD TABLE is set to 'on' if the table has a self-referencing
foreign key, otherwise it is set to 'off' as before.
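A table of the affected shape (hypothetical names):

```sql
-- The foreign key references the table itself, so a row's manager may
-- appear later in the data file; wait_for_commit 'on' defers the check
-- until commit instead of failing the load.
CREATE TABLE employee (
    emp_id     INT PRIMARY KEY,
    manager_id INT,
    FOREIGN KEY (manager_id) REFERENCES employee (emp_id)
);
LOAD TABLE employee FROM 'employee.dat';
```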
================(Build #3631 - Engineering Case #488844)================
Unique indexes could have unexpectedly grown in size over time. As well,
the server could have crashed while performing index maintenance. The server
keeps deleted unique index entries until commit or rollback, in order to
avoid having to do key range locking. In some cases the deleted entries were
not being reclaimed properly. This has now been corrected.
================(Build #3631 - Engineering Case #488666)================
In very rare, timing-related circumstances, the server could have appeared
to hang while executing a backup. The backup could have been a server-side
backup or a client-side backup. This has been fixed.
================(Build #3631 - Engineering Case #473559)================
The server could have failed to recover a database, with assertion failures
200502, 201417 or 201418 likely in this situation. This was most likely
to have occurred when the server was running on a Windows CE device, and
the device was reset while the server was active. Databases using checksums
or encryption may be more prone to seeing this problem. While this problem
has been fixed, it is still possible to get these assertions when there is
something wrong with the database.
================(Build #3630 - Engineering Case #488754)================
If a variable of type nchar, nvarchar or long nvarchar was declared, and
then used in a query involving a remote table, then it was likely that the
server would have failed with the error "not enough host variables".
A simple example of a query that would have given this error is:
SELECT * FROM remote_t WHERE c1 = @nvar
In this example, if the table remote_t was a proxy table, and the variable
@nvar was of type nchar, nvarchar or long nvarchar, then the server would
have failed to execute the query with the "not enough host variables"
error. This problem has now been fixed.
================(Build #3629 - Engineering Case #489167)================
If an application used version 9 or earlier client software, and connected
to a version 10.0.1 server, the server could have crashed or an incorrect
character set could have been used. Also, if the application used the CHARSET
connection parameter, the connection would have failed. This has been fixed.
================(Build #3629 - Engineering Case #489152)================
If the operating system date was erroneously set to a date in the far future,
some servers and tools that collected feature logging information in sadiags.xml
may have crashed. This has been fixed.
================(Build #3629 - Engineering Case #489072)================
The server may have crashed while executing an image backup if all the files
of the database were no longer accessible. The most likely scenario for this
problem to occur was when the database was started in read-only mode on a network
share, and the network connection was lost. This has been fixed and the BACKUP
statement will now fail with an error.
================(Build #3629 - Engineering Case #488941)================
The OPEN operation for a cursor that used a query that referenced proxy tables
may have caused the server to crash. This would only have happened if the
final cursor type was KEYSET. This has been fixed.
================(Build #3629 - Engineering Case #488514)================
When running on Windows Server 2008 (which has not yet been released by Microsoft),
the server could have crashed while performing an integrated login when the
INTEGRATED_SERVER_NAME was blank. This has been corrected.
================(Build #3628 - Engineering Case #488993)================
A very specific form of database corruption could in rare instances have
been undetected by the database validation tools. This has been fixed.
================(Build #3628 - Engineering Case #488857)================
If an UPDATE statement contained a SET clause that assigned a value to a
variable, then the variable could have been assigned a value that had a length
or precision/scale that exceeded the declared domain of the variable. This
would have caused subsequent operations with the variable to use this longer
value. This problem only affected variables of type NUMERIC/DECIMAL or string
types. This problem has now been fixed.
For example:
create variable @text varchar(3);
update T set @text = 'long long string', salary = salary
Previously, the update statement would succeed and the value of @text was set
to 'long long string'. Now, the statement fails with an error (provided the
string_rtruncation option has its default value):
Right truncation of string data [-638] ['22001']
================(Build #3628 - Engineering Case #488265)================
SQL Anywhere does not permit direct manipulation of catalog tables. Any attempt
to do so should result in a permission denied error. Under certain circumstances
though, an attempt to perform one of these prohibited operations could have
caused the server to behave erratically or crash. The server will now correctly
report a permission denied error.
================(Build #3627 - Engineering Case #488765)================
Starting the utility database could have caused the server to fail assertion
200500. This has been fixed.
================(Build #3625 - Engineering Case #488680)================
The server could have crashed if it was run with TCP/IP disabled, and diagnostic
tracing was attempted. This has been fixed.
================(Build #3624 - Engineering Case #488410)================
A server running with AWE enabled may have performed poorly, or failed an
out of memory assertion. This has been fixed.
================(Build #3624 - Engineering Case #488406)================
In a very rare situation, attempting to execute a CREATE DATABASE statement
could have resulted in a server crash. This has been fixed.
================(Build #3624 - Engineering Case #488404)================
The value returned for the Connection and Database property QueryCachePages
would have been incorrect. This has now been corrected.
================(Build #3624 - Engineering Case #488350)================
When using the SET OPTION statement to change the value of a database option,
the absence of any value signifies a request to delete the option setting
altogether. On the other hand, specifying the empty string ('') is considered
a request to set the option value to be the empty string. However, the empty
string was being treated the same way as the absence of the option value.
Note that the problem is seen only when the SET OPTION statement is executed
through dbisqlc, or another Embedded SQL application that makes use of the
corresponding DBLIB API call. Sending the SET OPTION statement directly to
the server for execution does not exhibit the erroneous behaviour. This has
been fixed so that the server will no longer treat the empty string as a
request to delete the option setting.
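For example, when sent through dbisqlc or another DBLIB-based application,
the two statements below are now handled differently (the option name
my_option is purely illustrative):
SET OPTION PUBLIC.my_option = '';  -- sets the option to the empty string
SET OPTION PUBLIC.my_option = ;    -- deletes the option setting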
================(Build #3624 - Engineering Case #488324)================
When deployed to a Windows Mobile 6 device with the language set to Japanese,
it was not possible to shut down the server once it was started. The menu
was not there to shut down, or get version info. This would have happened
on both Standard and Professional devices, and in the emulators for both
types of device. This has now been fixed.
================(Build #3624 - Engineering Case #488218)================
The Deployment Wizard was failing to create the following two registry entries:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Eventlog\Application\SQLANY
10.0
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Eventlog\Application\SQLANY
10.0 Admin
This has been corrected.
================(Build #3621 - Engineering Case #488129)================
A numeric value longer than 80 digits may have been silently truncated to
80 characters when implicitly converted to a string value. One instance where
this could have occurred was from within a column DEFAULT specification,
e.g.:
create table test (col1 numeric(108,38) default 1111111111222222222233333333334444444444555555555566666666667777777777.12345678901234567890123456789012345678)
The value stored in the catalog (the SYSTABCOL "default" column)
would have been truncated to: 1111111111222222222233333333334444444444555555555566666666667777777777.12345678
This has been fixed.
================(Build #3621 - Engineering Case #488094)================
A failing DROP TABLE statement could have caused table pages to be leaked.
This could only have happened if pages had been allocated to the table's
arenas between the last checkpoint and the failed DROP TABLE. Database validation
would not have detected these leaked pages. This has been fixed.
A workaround for this problem is to always issue a checkpoint before attempting
a DROP TABLE that has a chance of failing.
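For example, assuming an illustrative table t whose drop might fail, the
workaround is simply:
CHECKPOINT;
DROP TABLE t;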
================(Build #3621 - Engineering Case #487972)================
When running the Extraction utility (dbxtract) on a large database, an equally
large database would have been created, even though the amount of extracted
data may have been significantly less. In cases such as this, the majority
of pages in the new database would have been free pages. This has been corrected.
================(Build #3621 - Engineering Case #487783)================
If a database mirroring system was using a mode other than "synchronous",
loss of quorum did not force the primary server to restart and wait for one
of the other two servers to become available. This has been fixed.
================(Build #3621 - Engineering Case #485702)================
The server may have crashed while attempting to execute a trigger that had
a syntax error. This has been fixed.
================(Build #3620 - Engineering Case #487847)================
In very rare cases, execution of a DROP EVENT statement to drop a scheduled
event could have crashed the server. This has been fixed.
================(Build #3619 - Engineering Case #487673)================
The changes for Engineering case 471948 introduced a problem such that when
the server was very close to the connection limit, either the limit set by
licensing or the hard-coded limit in the Personal Server, new HTTP connections
may have been rejected incorrectly. This has been fixed.
================(Build #3618 - Engineering Case #487443)================
It was possible for the database administration utilities, or other applications
which made use of the DBTools interface, to have crashed when attempting
to access a file that was in use. This has been fixed.
================(Build #3618 - Engineering Case #486777)================
In rare cases, opening a cursor on a procedure call or batch statement could
have caused a server crash. This has been fixed.
================(Build #3617 - Engineering Case #487520)================
When the server was run on Windows CE devices with the -qw or -qi command
line options, a menu bar containing the menu "Menu" would have
been displayed, but no server window. This has been fixed so that no part
of the server is now visible if the -qw or -qi options are used.
================(Build #3617 - Engineering Case #487507)================
If a 32-bit server was used on a system that had more than 4GB of memory
available, then dynamic cache sizing could have selected cache sizes that
were inappropriately small. An inappropriately small cache selected in this
way could have made queries very slow, even slower than if that cache size
had been set at startup time. This problem has been fixed.
Note, this problem can be avoided by using the -ca 0 engine switch to disable
dynamic cache sizing.
================(Build #3617 - Engineering Case #487505)================
On Windows CE, starting a second server while one is already running should
display the existing server, unless the existing server is in quiet mode
(started with the command line option -qi or -qw). After pressing the Hide
button on a server, starting a second server did not display the existing
server. This has been fixed so that the first server is now displayed when
a second server is started, even if the Hide button is pressed.
================(Build #3617 - Engineering Case #487496)================
The selectivity estimates used by the optimizer could have had reduced quality
in some cases. In particular, if a database was used on two platforms with
different endianness, the selectivity estimates could have been wrong immediately
after starting on the other platform. The selectivity estimates would have
gradually improved with use on the new platform, or until a CREATE STATISTICS
command was used to recreate the estimates. This has been fixed.
================(Build #3617 - Engineering Case #487411)================
A keyset cursor is used when there is a possibility that a row could otherwise
be updated multiple times. For example, a keyset cursor is needed if there
are triggers, or if an UPDATE modifies columns in an index used by the UPDATE's
plan. In some situations, an UPDATE statement could have used a keyset cursor
in cases where it was not necessary, thus leading to slower performance.
This has been fixed.
================(Build #3617 - Engineering Case #487142)================
In some specific situations, an UPDATE statement could have failed to complete.
It would continue to execute until cancelled. This has been fixed.
================(Build #3616 - Engineering Case #487335)================
It was possible for an HTTP request to an SA DISH service to hang the server
while consecutively altering services. This has been fixed.
================(Build #3616 - Engineering Case #485054)================
The optimizer attempts to use an existing index for queries involving the
MIN or MAX aggregate function. The cost of this optimization was incorrectly
overestimated for subqueries which could have been evaluated as derived tables,
so it was possible that the
subquery's best plan did not have the RowLimit operator, which may have
resulted in the choice of a poor performing plan. This has been fixed.
For example, the query:
select * from R where R.X = (select max(R.X) from R )
would have had the cost of the plan "R<idx_x>:[ RL [ R<idx_x>
]]" overestimated by the optimizer. Hence, it was more likely to use
"R<seq> JH [GrH [ R<seq>]]" as the best plan, which
computed the subquery as a derived table. This plan may have been inefficient
for queries where the table R was very large.
================(Build #3615 - Engineering Case #487247)================
The server keeps track of dependencies of views on other views and tables.
In databases with an extremely large number of objects, the dependency information
could have become inaccurate. In order for the problem to have manifested itself,
some of the dependent or referenced objects must have had object ids that
were greater than 4G (2^32). This has been fixed. For existing databases,
problematic views must be recompiled with an updated version of the software.
If the actual number of current objects is much smaller than 4G, then the
problem can be resolved by unloading and reloading the database without the
need for a server software update. The reload should result in a compacting
of the used object id space.
================(Build #3615 - Engineering Case #487178)================
The server will no longer fail assertion 102300 - "File associated with
given page id is invalid or not open", when executing DROP DBSPACE if
the dbspace had been deleted.
================(Build #3615 - Engineering Case #486476)================
Due to a memory leak, calling the system procedure xp_sendmail() many
times could have caused the server to crash. The memory leak has been fixed,
and xp_sendmail will now fail with the error "Insufficient memory"
(return code 15) if memory does become exhausted.
================(Build #3613 - Engineering Case #487001)================
The server could have hung in very rare timing dependent cases if it was
using the -z or -zr command line options for diagnostic messages and request
level logging. This has been fixed.
================(Build #3613 - Engineering Case #485593)================
The server could have become deadlocked while running concurrent REORGANIZE
TABLE statements on the same table. There can now only be one REORGANIZE
TABLE statement executing on a table at a time. Attempts to execute a second
REORGANIZE TABLE on the same table will now result in the error SQLSTATE_REORG_ALREADY_IN_PROGRESS.
================(Build #3612 - Engineering Case #486864)================
A server running with the -b command line option (run in bulk operations
mode) would have accepted more than one connection. This has been corrected.
================(Build #3612 - Engineering Case #486788)================
The changes for Engineering case 485499 introduced a bug which could have
caused the server, under certain circumstances, to crash when creating a
foreign key constraint. The problem has been fixed.
================(Build #3612 - Engineering Case #486775)================
If a remote query that had to be handled in no-passthrough mode involved
many tables, there was a chance the query would have caused a server crash.
Such crashes were more likely with databases that had a smaller page size.
This problem has been fixed, and the server will now properly give an error
when a no-passthrough mode query with too many table nodes is executed.
================(Build #3612 - Engineering Case #486656)================
The execution of a LOAD TABLE statement would have caused the server to erroneously
fire INSERT triggers declared on the table being loaded. This has been corrected;
the server will no longer fire triggers when executing a LOAD TABLE statement.
================(Build #3612 - Engineering Case #485821)================
In very rare circumstances, and only on Unix platforms, queries on a given
table could have become very long running. The total query cost as reported
by a graphical or long text plan of such slow queries, would have been reported
as a negative number. This has been fixed.
A workaround is to drop and recreate statistics on the table in question.
================(Build #3611 - Engineering Case #486554)================
The changes for Engineering case 485200 resulted in a bug where attempting
to fetch data from Microsoft SQL Server or ASE could have failed with an "invalid
object name" error. This problem has been fixed.
================(Build #3611 - Engineering Case #486462)================
Database corruption could have occurred when execution of a LOAD TABLE statement
into a table with existing data failed and rows were subsequently inserted
or updated before a database restart. This has been fixed.
================(Build #3611 - Engineering Case #485875)================
On some combinations of consolidated database and platform, any UUID values
retrieved using the MobiLink Java direct row API could have had bytes swapped
to the local machine byte ordering. This has been fixed. UUID values retrieved
using the getBytes() function are now 16 byte values with the correct byte
ordering. UUID values retrieved using the function getString() are strings
in the correct UUID format (eg. "12345678-1234-5678-9012-123456789012").
================(Build #3611 - Engineering Case #485689)================
When attempting to insert a long binary column into a proxy table where the
value being inserted was fetched from a local table, there was a chance the
server could have hung with up to 100% CPU usage. This problem has been fixed.
================(Build #3610 - Engineering Case #486440)================
The server could have crashed when attempting to recover a database with
a corrupted transaction log file. This has been fixed.
================(Build #3610 - Engineering Case #486393)================
A query with an outer join could have caused the server to hang with 100%
CPU usage. This has been fixed.
================(Build #3609 - Engineering Case #485818)================
The server could have become deadlocked when deleting rows from a table following
the execution on an ALTER TABLE statement for a table that had foreign keys
referencing that table. This has now been corrected.
================(Build #3608 - Engineering Case #486059)================
Using AES_FIPS encryption for an extended period of time (eg. calling the
encrypt/decrypt functions hundreds of thousands of times) could have caused
the server to report an "out of memory" condition and shut down.
This could also have occurred when running an AES_FIPS-encrypted database.
This has been fixed.
================(Build #3606 - Engineering Case #485939)================
Backups of encrypted databases created by executing the BACKUP DATABASE statement
with the "WITH CHECKPOINT LOG RECOVER" clause, may have contained
pages that did not decrypt properly. This has been fixed.
Note, any backups of encrypted databases using this clause should be considered
invalid and recreated with an updated server.
================(Build #3606 - Engineering Case #485874)================
Calling the system procedure sa_send_udp() could have caused the server
to crash. This has been fixed.
================(Build #3605 - Engineering Case #486744)================
The server may have crashed or returned an unexpected error when attempting
to execute an UPDATE statement, without the WAIT_FOR_COMMIT option set to
'on', on a table that had both a BEFORE UPDATE row-level trigger and an AFTER
UPDATE row-level trigger that used the OLD values of the columns. For this
problem to have occurred, the BEFORE UPDATE trigger must have changed the
NEW value of a column that was not among the columns assigned in the UPDATE's
SET clause, and the update operation must have modified rows that violated
referential integrity. This has been fixed.
================(Build #3605 - Engineering Case #485802)================
The query definitions of materialized views in SQL Anywhere are restricted
from using certain SQL constructs. As an example, materialized views are
prohibited from making references to user defined functions. The server was
erroneously rejecting the creation of materialized views that make references
to a table with computed columns when the definitions of the computed columns
made use of a SQL construct that was not allowed within materialized view
definitions. This has been corrected so that the creation of materialized
views under these circumstances will no longer result in an error.
================(Build #3605 - Engineering Case #485799)================
When undoing a failed ALTER TABLE or LOAD TABLE statement, the server could
have become deadlocked. This has now been corrected.
================(Build #3605 - Engineering Case #485700)================
ALTER INDEX or DROP INDEX statements executed while a transaction snapshot was
active would always have failed with error -1062 "statement not allowed
during snapshot". This has been fixed so that the error is now given only
if active snapshots remain after the commit that occurs at the beginning of
these statements. Normally, this will only happen if there is a cursor opened
"WITH HOLD" that is using a snapshot (either statement or transaction).
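For example, assuming an illustrative table t with an index idx, the following
sequence previously failed on the last statement, even though its implicit
commit ends the connection's snapshot:
SET TEMPORARY OPTION isolation_level = 'snapshot';
SELECT * FROM t;              -- starts a transaction snapshot
ALTER INDEX idx ON t REBUILD; -- previously failed with error -1062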
================(Build #3605 - Engineering Case #485498)================
The REBUILD clause of the ALTER INDEX statement can be used to recreate the
physical data structures underlying the index. If the ALTER INDEX REBUILD
statement was interrupted by the user, or failed for any reason, the server
could have left the physical structure in an undefined state. The most likely
state for the index after a failure was to contain no entries. This situation
could have caused subsequent queries using the faulty index to behave erroneously.
To rectify this situation, a failed ALTER INDEX REBUILD could be executed
again to completion. This has been fixed so that the server will now restore
the physical data structures to the same state as the one that existed prior
to execution of the failed statement.
================(Build #3603 - Engineering Case #485574)================
The server could have crashed when attempting to get an exclusive schema lock
on a table. This has been fixed.
================(Build #3603 - Engineering Case #485499)================
The server shares physical indexes between compatible primary keys, foreign
keys, unique constraints and secondary indexes. Two indexes are considered
compatible if the keys for the indexes contain exactly the same columns in
the same order and with the same sequencing of values (ascending or descending).
When creating a new foreign key index the server could have shared the physical
index with an existing index erroneously even when the order of columns did
not match. Note that the foreign key constraint was still correctly enforced,
but the index was created with an unintended column order which may be problematic
for queries that required the specified order for the index to be useful.
This has now been fixed.
A workaround is to create the other index after the foreign key index has
been created, or to declare an additional index with the correct column order.
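For example, with the illustrative tables below, the additional CREATE INDEX
statement ensures that an index with the intended column order ( x, y ) exists,
regardless of which physical index the foreign key shares:
create table parent( a int, b int, primary key( a, b ) );
create table child( x int, y int,
    foreign key( y, x ) references parent( a, b ) );
create index child_xy on child( x, y );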
================(Build #3603 - Engineering Case #484698)================
The index density calculation could have been incorrect in certain cases.
This has now been corrected.
================(Build #3603 - Engineering Case #483518)================
In some cases, requesting a graphical plan for a query could have caused
the server to fail with a fatal error: 'A read failed with error code: (38),
Reached the end of the file. Fatal error: Unknown device error.' This has
been fixed.
================(Build #3603 - Engineering Case #481125)================
If an application was trying to fetch data from a char/varchar column through
SQLGetData, and if the buffer size passed to SQLGetData was less than the
length of the data that was to be fetched, the iAS ODBC driver for Oracle
could have returned the total length of the data minus the length of a NULL
terminator to the client in the last call to the SQLGetData function. This
may have caused the application to report data truncation errors. This problem
has now been fixed.
================(Build #3602 - Engineering Case #485425)================
Executing a query containing proxy tables that would normally have been
handled in 'full passthru' mode, but that was too complex for the server
to handle, would have crashed the server. This has been fixed so
that the server now properly returns the error "-890: Statement size
or complexity exceeds server limits".
================(Build #3601 - Engineering Case #485378)================
In a mirroring system, if the transaction log files on the primary and mirror
were incompatible, the mirror server may not have properly detected this
condition and shut down. This has been fixed.
================(Build #3601 - Engineering Case #485338)================
When running the server with the command line option -m "truncate transaction
log after checkpoint", or when running the Backup utility dbbackup with
its command line options that restart the log -r, -x or -xo, the current
transaction log was renamed and then, depending on the option used, deleted.
If the rename of the transaction log failed, assertion failure 100910 was
raised. The message for this assertion has been changed to give
more information about the cause. The message was changed from:
"Error deleting transaction log file"
to
"Error renaming transaction log file before deleting it.
Error code: %d/%d"
The first number of the Error code means the error type (0 - POSIX error
number, 1 - OS error number). On Unix and Netware, both error types mean
POSIX error numbers. The second number is the actual error code. On Windows
the POSIX error numbers can be found in the header errno.h.
================(Build #3601 - Engineering Case #485271)================
For strongly encrypted databases, the statement ATTACH TRACING TO LOCAL DATABASE
correctly fails with the SQL error "ATTACH TRACING TO LOCAL DATABASE
cannot be used with a strongly encrypted database", but subsequent ATTACH
TRACING statements incorrectly returned the SQL error "A tracing connection
is already active". This has been fixed.
================(Build #3601 - Engineering Case #485252)================
In some situations, the server could have crashed when executing a query
access plan that was built with parallel scans. This has been fixed.
A workaround for this problem is to set the option Max_query_tasks=1 to
avoid all parallel access plans. That change will degrade performance for
some queries, but will avoid the crash.
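The workaround can be applied for all connections as follows:
SET OPTION PUBLIC.Max_query_tasks = 1;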
================(Build #3601 - Engineering Case #484980)================
The number of rows returned by a partial index scan of a unique index should
be at most one row, if all the columns of the index are used in an equijoin
predicate. This number was not being set correctly. This has been fixed.
================(Build #3600 - Engineering Case #485235)================
If a user-defined event executed a statement that referenced a proxy table
while the database was being shut down, the server may have failed an assertion.
This has been fixed so that these statements now return an error.
================(Build #3600 - Engineering Case #485200)================
When connecting to ASE or Microsoft SQL Server, the Remote Data Access layer
was setting the quoted_identifier option to ON at connect time and then always
quoting identifiers when querying data from ASE or SQL Server. Unfortunately,
due to a restriction in the ASE server, always using quoted identifiers resulted
in problems if the column name was 29 or 30 characters in length. Now, when
connecting to ASE or SQL Server remote servers, the quoted_identifier option
on the remote is set to match the local setting.
================(Build #3600 - Engineering Case #485191)================
When Snapshot isolation was enabled, pages in the temp file could have been
leaked if long running snapshot transactions were used. This has now
been corrected.
================(Build #3600 - Engineering Case #485160)================
If a connection had snapshot isolation enabled and a non-snapshot transaction
in progress, and a second connection's snapshot or non-snapshot transaction
committed or rolled back while no snapshot transactions begun before the
first connection's transaction were outstanding, then subsequent snapshot
queries could have failed assertion 201501 "Page for requested record
not a table page or record not present on page". This has been fixed.
================(Build #3599 - Engineering Case #485073)================
Attempting to create a proxy table may have caused the server to hang. Restarting
the server and running the create again would likely not reproduce the hang.
This problem has now been fixed.
================(Build #3599 - Engineering Case #484984)================
If an application connected using an older version of either Open Client
or jConnect, and then called the system procedure sa_conn_info, there was
a chance the client would have crashed. Column names longer than 30 characters
are a problem for older TDS clients. The problem has been fixed by properly
aliasing the column names in the sa_conn_info select statement.
================(Build #3599 - Engineering Case #484981)================
If REORGANIZE TABLE was executed in one connection, while simultaneously
dropping the primary key or clustered index on the same table in another
connection, the server could have crashed. This has been fixed.
================(Build #3599 - Engineering Case #484487)================
On Solaris 10 systems, calling the system procedure xp_cmdshell() may have
failed if the server's cache was large. This has been fixed. This problem
still affects Solaris systems running versions 8 and 9, as the problem arises
from the implementation of the fork system call and cannot be worked around
safely on those versions. A more complete explanation can be found
at: http://developers.sun.com/solaris/articles/subprocess/subprocess.html.
================(Build #3599 - Engineering Case #484274)================
Attempting to execute a query using a window function with a PARTITION BY
clause, that consisted of nothing but constants or variables, could have
crashed the server.
For example:
create variable v int;
select sum(x) over ( partition by v ) from t
This has been fixed.
================(Build #3599 - Engineering Case #483655)================
If an application connected using a TDS based client, and attempted to use
a procedure in the FROM clause of a SELECT statement, then the application
would have failed with a TDS protocol error. This problem has now been fixed.
================(Build #3597 - Engineering Case #484798)================
A query using a window function with a RANGE on a date column could have
returned a conversion error. This has been fixed.
================(Build #3597 - Engineering Case #484704)================
On Linux IA64 systems, "unaligned access" messages may have appeared
in the system log while the server was running. The problem that caused these
messages has been fixed. The message itself can be considered harmless in
this instance, and server operation was not affected.
================(Build #3597 - Engineering Case #484679)================
The server could have crashed, or failed an assertion, when reloading a procedure.
This could only have happened if another connection was unloading the procedure
almost simultaneously. This has now been fixed.
================(Build #3597 - Engineering Case #484145)================
Using a host variable of type nchar, nvarchar or long nvarchar in a query
that references proxy tables would likely have caused the server to report
a syntax error, rather than execute the query. The server was not handling
nchar based host variables correctly. It would at times assume the data was
in the database character set instead of UTF-8. This problem has now been fixed.
================(Build #3597 - Engineering Case #472486)================
On Linux IA64 systems, the server may have caused "floating-point assist"
faults. These warnings are the result of operations on denormal (quantities
too small to be represented in normalized form) floats and doubles in the
server, and can be considered harmless, especially if the number of warnings
is low. The Itanium CPU is incapable of operating on denormal numbers in
hardware and enters software emulation mode when such an operation is requested.
The Linux kernel detects this and displays a performance warning, as the
software emulation entails a performance penalty. The correctness of server
operations is not compromised. All the same, the server has been modified
to minimize internal use of denormal numbers. However, the warnings will
still appear if the server is explicitly requested to operate on denormals
(for instance, with the statement "SELECT 2.25e-309").
================(Build #3596 - Engineering Case #484605)================
Under heavy load, and while another connection was executing DDL statements,
calls to a user-defined function could have resulted in a server crash. This
has been fixed.
================(Build #3596 - Engineering Case #484456)================
When running on VMware, the SQL Anywhere server may have crashed on start-up.
There was no risk of database corruption. The crash was due to the server
determining that the number of CPUs available was 0. This has been fixed.
================(Build #3596 - Engineering Case #477194)================
Executing a REORGANIZE TABLE statement while other connections were also
performing DML on the table, could have caused the database server to fail
an assertion. Such assertions would have included 100701: "Unable to
modify indexes for row referenced in rollback log". The REORGANIZE TABLE
statement could have looped infinitely if another connection attempted to
drop the table. A failed REORGANIZE TABLE, due to a deadlock or perhaps
some other error, could have caused the server to crash. These problems
have now been fixed.
================(Build #3596 - Engineering Case #473622)================
Re-executing the exact same "FORWARD TO" statement multiple times
on the same connection could have failed with various errors. FORWARD TO
statements were incorrectly being cached, but they must be prepared each
time they are executed. This has been fixed by no longer caching FORWARD
TO statements.
A workaround is to disable client statement caching for the connection by
setting the max_client_statements_cached temporary option to 0.
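For example, the workaround can be applied on the affected connection with:
set temporary option max_client_statements_cached = 0;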
================(Build #3596 - Engineering Case #464477)================
After the execution of an ALTER TABLE [ADD | DROP | MODIFY ] COLUMN statement,
the server would have failed to reload trigger and stored procedure definitions.
This reload should have caused a recompile of the trigger or procedure definitions,
which would have altered the semantics of their statements if they depended
upon the ALTERed column. As an example, if a query in a trigger definition
used the syntax "SELECT *" and referenced the modified table, an
incorrect number (or type) of columns would have been returned in the query's
result. This oversight has been corrected.
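As a hypothetical illustration of the scenario using a procedure (table,
column, and procedure names are illustrative):
create table t1( a int, b int );
create procedure p()
begin
  select * from t1;
end;
alter table t1 add c int;
Before the fix, calling p after the ALTER TABLE could still have returned
only columns a and b, because the procedure definition was not recompiled.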
================(Build #3595 - Engineering Case #485597)================
The changes for Engineering case 480208 introduced a problem where attempting
to insert a long varchar or binary value into a proxy table on 64-bit platforms
could have crashed the server. This problem has been fixed.
================(Build #3595 - Engineering Case #484054)================
If a statement used the LOCATE() function on a long string, it could have
led to a reduction in the concurrent execution of other statements. Further,
it could have caused the statement to take a long time to respond to a cancel
operation. Similarly, some queries that used a sequential scan with predicates
that reject most of the rows in the table, might have taken a long time to
respond to a cancel operation. These problems have now been fixed.
================(Build #3592 - Engineering Case #484269)================
When run on Unix systems, the server could have exhibited poor disk I/O performance
on multi-spindle disk systems. This has been fixed.
================(Build #3592 - Engineering Case #484262)================
In very rare circumstances, the server may have issued a fatal checksum assertion
for database page 0 while in the process of doing a backup. This has been
fixed.
================(Build #3592 - Engineering Case #484256)================
Attempting to execute a query with an invalid dotted reference involving
proxy tables, would very likely have caused a server crash.
For example:
select prod_id.sum(sales) from sales group by prod_id order by prod_id
This query was intended to be "select prod_id, sum(sales) ..."
but a typo replaced a comma with a period. If the table prod_id was a remote
table, then the above mistyped query would have crashed the server. This
problem has now been fixed.
================(Build #3592 - Engineering Case #484178)================
Reloading histograms into a database using a 64-bit server may have failed
if the database had previously been run with a 32-bit server. This has been
fixed.
================(Build #3592 - Engineering Case #484160)================
Attempts to revoke connect permission from a user that still had an externlogin
mapped would have failed with a spurious foreign key error, after which
logging in as the user being dropped would have left that user's permissions
in an inconsistent state. The server now properly reports an error indicating
that externlogins are still mapped for the user, and leaves the user's
permissions unchanged.
================(Build #3592 - Engineering Case #484046)================
Specific forms of the IN predicate could have caused the server to crash.
This has been fixed.
================(Build #3592 - Engineering Case #481834)================
Applying a transaction log to a database using the server command line option
-a, could have failed assertion 100902: "Unable to find table definition
for table referenced in transaction log -- transaction rolled back".
This would have occurred if one log (B) contained all of the operations of
the preceding log (A), plus additional operations, and a sequence of modifications
to a table T by one connection, which begin in log A and continued into log
B, were active when the backup that created log A was performed. This has
been fixed. The fix affects the contents of the transaction log at the time
a backup is performed; thus, a fixed server does not permit log backups created
prior to the fix to be applied.
Note that the problem does not affect backups where a log rename is performed
after each backup, since in that case the logs will not contain overlapping
sections. The problem can be avoided in version 10 by applying all of the
logs at one time using the -ar or -ad options, which will cause the server
to skip logs that are included by subsequent logs.
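As a sketch (file and directory names are illustrative), all logs in a
directory can be applied in one operation with the -ad option mentioned above:
dbeng10 mydatabase.db -ad backup_logs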
================(Build #3592 - Engineering Case #481099)================
If a server already had a database running, and that database was not the
utility database, then attempting to connect to the utility database would
have failed for a TDS based connection. This problem has been fixed.
================(Build #3592 - Engineering Case #480208)================
If a proxy table to a table on a Microsoft SQL Server remote server had a
long character column, attempting to insert a string longer than 254 characters
into that column would very likely have caused SQL Server to have returned
a "partial update/insert" warning. This problem has now been fixed.
================(Build #3591 - Engineering Case #483960)================
In very rare circumstances, the server may have crashed if a fatal assertion
was encountered during a backup. This has been fixed.
================(Build #3591 - Engineering Case #483441)================
In a database mirroring system, if the mirror or arbiter connection strings
contained any of the DoBroadcast, TDS, BroadcastListener, LocalOnly, ClientPort,
VerifyServerName, LDAP, or DLL TCPIP parameters, that server could have crashed,
or failed to connect to the partner or arbiter servers. This has now been
fixed.
================(Build #3591 - Engineering Case #480738)================
Starting with version 10.0, the number of potential plans the optimizer can
choose from has increased dramatically due to parallel access plan enumeration,
and usage of materialized views. With very complex queries (joins of more
than 20 tables), for which the optimizer enumerates parallel plans as well
as non-parallel plans, the valid best plan may not have been found before
pruning took place. This has been fixed.
================(Build #3590 - Engineering Case #483845)================
Statements referencing some specific forms of the LIKE predicate could have
caused a server crash when the statement was prepared. This has been fixed.
================(Build #3590 - Engineering Case #483815)================
If a procedure that referenced proxy tables was used in the FROM clause of
a SELECT statement, and the first statement in the procedure was not a SELECT
statement, it was very likely that the server would have crashed. The problem
could also have occurred if such a SELECT was used as a subselect in a DML statement.
This has now been fixed.
================(Build #3589 - Engineering Case #483559)================
The server could have gone into an endless loop, with very high CPU usage,
instead of reporting an error when it ran out of free pages in the cache.
This has been fixed.
================(Build #3586 - Engineering Case #481493)================
In rare timing dependent cases a Unix application could have hung if prefetch
was enabled and only part of a result set was fetched. This has been fixed.
================(Build #3584 - Engineering Case #482958)================
It was possible, although rare, for the optimizer to select a less than optimal
access plan for complex queries. This has been fixed.
================(Build #3583 - Engineering Case #483356)================
The server could have become deadlocked when a connection attempted to block
while updating a row. This was more likely to happen when the server was under
heavy load. This has now been corrected.
================(Build #3583 - Engineering Case #483223)================
The server could have crashed when executing the sa_transactions system procedure.
This has been fixed.
================(Build #3582 - Engineering Case #483913)================
If a procedure, trigger, or view was created using EXECUTE IMMEDIATE, or
via Sybase Central, a trailing semi-colon included at the end of the definition
may not have been stripped out of the preserved source for the object, resulting
in problems when the database was unloaded and reloaded. This has been fixed.
Re-creating the object without specifying a trailing semi-colon will correct
the problem.
================(Build #3582 - Engineering Case #482977)================
If a global variable had the same name as a procedure's parameter, statements
within the procedure could have executed using the wrong variable. For this
to have occurred, no other variable references could occur between the last
reference to the global variable and the execution of the statement referencing
the local variable in the procedure. This has been fixed. One workaround
is to use a different name for the global variable.
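A hypothetical sketch of the conflict (names are illustrative):
create variable total int;
create procedure p( total int )
begin
  -- before the fix, this could have referenced the global variable instead
  set total = total + 1;
end;
Renaming either the global variable or the parameter avoids the collision.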
================(Build #3582 - Engineering Case #482593)================
If a query contained a subquery that used a hash filter predicate, such as
hash(T.x) in hashmap(R.y), then it was possible for the hash filter to inappropriately
reject rows that ought to be accepted resulting in an invalid result set.
This has been fixed.
================(Build #3581 - Engineering Case #482952)================
The server may have crashed when doing extensive console logging (e.g. when
-zr all was used). This was more likely to have occurred on multi-processor
machines, and has now been fixed.
================(Build #3581 - Engineering Case #482841)================
Execution of a LOAD TABLE statement may have performed poorly if it was used
on a table containing a column with DEFAULT AUTOINCREMENT. The server was
unnecessarily doing a full reverse index scan to determine the new maximum
value for the autoincrement column after the LOAD TABLE. This has been fixed.
================(Build #3581 - Engineering Case #475488)================
Inserting short or highly compressible data into a compressed column (either
through INSERT or LOAD TABLE) may have created an excess number of free pages
in the database. This has been fixed.
================(Build #3580 - Engineering Case #482717)================
If the database option tsql_variables was set to ON (as it would be for OpenClient
connections), executing a CREATE DOMAIN statement containing identifiers
beginning with @ would have created the domain incorrectly. This would have
resulted in the error "not enough values for host variables" on
a subsequent operation on a table containing a column defined using the domain.
Also, executing a CREATE TRIGGER statement with an identifier beginning with
@ prior to the body of the trigger would have resulted in a syntax error. Both of
these problems have been fixed.
================(Build #3579 - Engineering Case #482615)================
An unexpected SQL error may have been received when creating views over columns
with character length semantics. This could have occurred with NVARCHAR
or CHAR length VARCHAR columns when using a database with a multi-byte character
set.
For example:
CREATE TABLE test(col NVARCHAR(8191));
CREATE VIEW test_view AS SELECT col from test;
or, the following, when issued on a UTF-8 database:
CREATE TABLE test(col VARCHAR(8191 CHAR));
CREATE VIEW test_view AS SELECT col from test;
would have failed with SQL error CHAR_FIELD_SIZE_EXCEEDED (-1093): "The
size of the character column, variable, or value data type exceeds 32767".
This has been fixed.
================(Build #3578 - Engineering Case #481607)================
The database file size may have continued to grow, even when it was not expected
to. This problem could have occurred any time, but would likely be more noticeable
when constantly inserting and deleting the same set of rows (given the conditions
outlined below), running without a redo log, or when checkpointing frequently.
The main symptom of this problem was that the number of pages allocated to
a table's extension arena continues to increase during inserts, deletes or
updates when the amount of data in the table remained constant. There were
two main ways this problem would have been more readily noticed. First,
by running without a transaction log or when checkpointing frequently. In
this case, the number of pages in the ISYSTAB extension arena grows. Second,
when doing repeated updates, or paired inserts and deletes to a particular
set of rows, when the number of overall rows did not increase. For the problem
to occur in this case, the rows must have contained a CHAR or BINARY type
value (i.e., VARCHAR, LONG VARCHAR, etc.), and the values must have been longer
than the column's INLINE amount. If the table was truncated or dropped, the extra
pages allocated to the extension arena would have been freed up and made
available for other uses. This has now been fixed.
Note, when this problem is noticed, rebuilding the affected database with
the fix will eliminate the extra pages from the extension arena.
================(Build #3574 - Engineering Case #481904)================
Attempting to call a non-existent procedure using the EXECUTE IMMEDIATE statement
and the 'WITH RESULT SET OFF' clause (i.e. EXECUTE IMMEDIATE WITH RESULT SET
OFF 'call sp_nonexist();'), would have caused the connection to hang. This
has been corrected so that an error indicating that the procedure does not
exist is now reported.
================(Build #3574 - Engineering Case #481893)================
Referencing a stored procedure in the FROM clause of a query could have incorrectly
returned a "permission denied" error. This would have occurred
when the following conditions were true:
- the procedure was owned by a user with DBA authority
- the procedure contained a single SELECT statement and no other statements
- permission to execute the procedure was granted to a non-DBA user
- the procedure's SELECT referenced a table for which the procedure owner
had not been granted permissions
This has been fixed. A workaround is to add a statement such as "if
1 = 0 then return end if;" to the start of the procedure.
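As a sketch (procedure and table names are hypothetical), the workaround
looks like:
create procedure p()
begin
  if 1 = 0 then return end if;
  select * from some_table;
end;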
================(Build #3574 - Engineering Case #480776)================
Queries with access plans that did scans of unique indexes, may have returned
incorrect results when using snapshot isolation. This has been corrected.
================(Build #3573 - Engineering Case #481964)================
The procedure dbo.sa_disk_free_space contained a redundant permissions check,
which has now been removed. The procedure can now be called if any of the following
conditions hold:
- the caller has DBA authority
- the procedure is called from another procedure owned by a user having
DBA authority
- execute permission has been granted
To correct the problem in existing databases without rebuilding or upgrading,
the call to dbo.sp_checkperms in dbo.sa_disk_free_space can be removed.
================(Build #3572 - Engineering Case #481970)================
The setting of the ASTMP_UMASK environment variable would have been ignored
on HP-UX systems. Also, the umask setting was ignored in the creation of
the lrm_socket on HP-UX and AIX. These problems have now been fixed.
================(Build #3572 - Engineering Case #481894)================
In rare cases, calling a secure web procedure could have caused the server
to crash. This has been fixed.
================(Build #3572 - Engineering Case #481891)================
When similar requests were executed concurrently, there could have been large
variations in their response times. This problem would have shown up more
on single processor machines. This has been fixed.
================(Build #3570 - Engineering Case #481649)================
It was possible, although very rare and timing related, for backups to have
hung. The backup could have been initiated from either the dbbackup
utility, or the BACKUP DATABASE statement. When this problem occurred, other
requests would have proceeded as normal; however, since a backup prevents
checkpoints, any connection that issued a checkpoint would have been blocked.
This has now been fixed.
================(Build #3570 - Engineering Case #481644)================
Same machine connections were using communication compression if the COMPRESS
connection parameter or -pc server option was specified. This has been fixed
so that only remote connections will be compressed.
The sa_conn_compression_info procedure could have reported incorrect compression
rates or packet compression rates if more than about two million bytes had
been transferred since the server started. This has been fixed as well.
In order to get the fixed sa_conn_compression_info procedure, the ALTER DATABASE
UPGRADE PROCEDURE ON statement must be executed (this will also upgrade the
database to the current version, if necessary).
================(Build #3570 - Engineering Case #481114)================
The server could have failed to report an I/O error, such as a full disk,
when using asynchronous I/O on Linux kernels 2.6.12 and higher. This has
been corrected.
================(Build #3569 - Engineering Case #474289)================
If an application made use of external function calls, the server may have
crashed if the application disconnected while a call to an external function
was being executed; or if enough connections called external functions such
that the server's worker thread limit was exceeded and a deadlock condition
arose. This problem has been fixed. When a deadlock condition arises, the
error "All threads are blocked" will result for one or more connections
executing an external function call. To avoid this error, the server option
-gn can be used to increase the number of worker threads.
When a client disconnects during execution of an external function call,
the server will no longer crash.
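For example (the value 40 and the database file name are illustrative), the
server could be started with a larger worker thread count:
dbsrv10 -gn 40 mydatabase.db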
================(Build #3564 - Engineering Case #481017)================
When run on Linux systems, the server may have displayed unpredictable behaviour
involving scheduled events, mirroring, the request_timeout option, and possibly
other functionality in the server that relied on timers at start-up. There
was a greater probability of this occurring on multi-processor machines.
This has been fixed.
================(Build #3564 - Engineering Case #480895)================
When registering the performance monitor counters DLL (dbctrs9.dll or dbctrs10.dll)
manually from a non-elevated process on Windows Vista (e.g., regsvr32 dbctrs10.dll),
the performance counters may not have actually been registered correctly,
even though regsvr32 reported that the dll was successfully registered. This
problem has now been fixed.
Note that the SQL Anywhere installation is performed from an elevated process
and is not affected by this bug.
================(Build #3564 - Engineering Case #480072)================
If a database was using a non-UCA collation (such as UTF8BIN) for NCHAR,
the server could have produced unexpected results such as different orderings,
or result sets, for the same data or corrupt indexes on NCHAR columns containing
long strings. In some cases the server may have crashed if the query ran
a sorting strategy. This has been fixed, but existing databases must be unloaded
and reloaded. A new capability bit has been added so that databases created
with the fixed engine cannot be started by servers without the fix.
Applications can determine if a database has the fix by querying the new
db_property( 'HasNCHARLegacyCollationFix' ). For servers released prior to
this fix, this function will return NULL. For 10.x databases running on a
server with this fix, this function will return 'On' if the database was
created by an engine with the fix AND uses a legacy NCHAR collation. Otherwise,
it returns 'Off'. For databases created after version 10, this function will
return 'On'. This approach allows newly-created 10.x databases that use a
UCA NCHAR collation to be started with older software.
When the server starts a database that was created by a server released
prior to this fix and the database uses a non-UCA NCHAR collation and has
at least one table defined to use an NCHAR column, the following warning
message will be displayed in the server's message window:
Warning: database \"%1\" is using an NCHAR collation for which a fix has
been made available. See http://ianywhere.com/developer/product_manuals/sqlanywhere/notes/en/nchar_collation_warning.html
Note that a database created by a server prior to this release could still
have problems sorting NCHAR data if NCHAR uses a legacy collation but the
warning is only displayed if NCHAR columns exist.
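For example, an application can check for the presence of the fix with:
select db_property( 'HasNCHARLegacyCollationFix' );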
================(Build #3562 - Engineering Case #480547)================
If a scheduled event was created with a start time that included milliseconds,
the event may have fired once, but then not have fired again. This has been
fixed. As a workaround, the schedule's start time can be altered to exclude
the milliseconds component.
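As a sketch of the workaround (event and schedule names are illustrative,
and the exact SCHEDULE clause syntax should be checked against the
documentation), the schedule can be defined with a start time that excludes
milliseconds:
create event my_event
schedule my_sched start time '12:00:00' every 1 hours
handler
begin
  message 'event fired';
end;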
================(Build #3561 - Engineering Case #480450)================
The server may have crashed calling an external function using the old-style
API, if that function attempted to return an unsupported data type. This
has been fixed so that this situation will return the error "Expression
has unsupported data type".
================(Build #3561 - Engineering Case #480423)================
A query that involved more than one Remote Procedure Call in the FROM clause
could have caused the server to crash.
An example of such a query is:
SELECT col1
from remote_procedure_1()
where col2 = (select c1 from remote_procedure_2())
This problem has now been fixed.
================(Build #3561 - Engineering Case #480317)================
The Server Licensing utility dblic may have crashed when asked to license
the server executable instead of the corresponding license file. With the
introduction of license files in 10.0.1, dblic now operates on the license
file instead of the server executable file, but tries to automatically determine
the name of the license file from the executable file name. In this case,
however, the utility crashed while doing so. This has been fixed.
================(Build #3560 - Engineering Case #480125)================
A database with a corrupt checkpoint log could have grown large during database
recovery, instead of failing with an assertion error. In some cases the database
could have grown so large that a fatal disk-full error resulted. This type
of corruption in the checkpoint log is now detected, and assertion 201864
is raised in such an instance. This type of corruption is most likely to be
caused by a disk problem.
================(Build #3560 - Engineering Case #480055)================
In some circumstances, backups could have taken longer to complete than expected.
This problem would have been noticed only if the total size of the database
files was greater than 2GB, and was more likely to be noticed when backing
up a very small fraction of that database size (e.g., such as when doing
a TRANSACTION LOG ONLY backup with a small transaction log). This has been
fixed.
================(Build #3560 - Engineering Case #479327)================
If an UPDATE statement on a view had more than one table expression in the
update clause after view flattening, and the table expression that would
have been updated was preceded in the update clause by a non-updatable table
expression, then the UPDATE did not change any rows.
Here are two examples:
1) This UPDATE statement has in its update clause first a non-updatable
table expression V1 and then the updatable table T1. It does not update column
"b" of table T1.
update ( select sum(T2.c) as xxx from DBA.T2 ) as V1( xxx) ,T1
set T1.b = isnull(V1.xxx,0)
where V1.xxx = T1.a and V1.xxx is not null and T1.a is not null
2) This UPDATE has two table expressions V1 and T1 in the update clause
after view replacement of V11. If the order of the table expressions in the
update clause is like in the example above, the UPDATE does not change any
rows.
create view V11 as select T1.b, isnull( V1.xxx, 0 ) as v2_2
from ( select sum(T2.d) as xxx
from T2 ) V1 right outer join T1 on V1.xxx
= T1
update V11 set b = v2_2
This problem has been fixed.
================(Build #3560 - Engineering Case #471948)================
If an HTTPS connection attempt failed because of licensing (i.e. too many
connections), the client would have received a plain-text error message,
instead of handshake data. HTTPS clients would have interpreted this as a
protocol error. This has been fixed.
================(Build #3559 - Engineering Case #479966)================
The row limitation FIRST or TOP n may have been ignored, and more rows returned,
if the select statement had an ORDER BY clause and all rows returned were
equal with regard to the ORDER BY clause. This has been fixed.
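For example (table and column names are illustrative), a query of the form:
select top 3 * from t order by c
could have returned more than three rows when every value of c was equal.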
================(Build #3559 - Engineering Case #479816)================
Executing a REORGANIZE TABLE could have caused the server to fail assertion
100701 - "Unable to modify indexes for a row referenced in rollback
log -- transaction rolled back". This has been fixed.
================(Build #3558 - Engineering Case #479959)================
The SORTKEY & COMPARE system functions did not support legacy ASA collations
correctly. The following problems have been fixed.
When the collation specification argument to the SORTKEY or COMPARE functions
referred to a valid SQL Anywhere legacy collation, the server would silently
use the UCA instead. For example, SORTKEY( 'a', '1252LATIN1' ) would generate
the same result as SORTKEY( 'a', 'UCA' ).
A small amount of memory would be leaked any time a connection was disconnected,
provided that the connection had used SORTKEY or COMPARE at least once with
a collation specification that was non-numeric and was not one of the built-in
ASE compatibility labels.
When an invalid collation specification was passed to the COMPARE function,
the error message that was generated (INVALID_PARAMETER_W_PARM, error -1090)
did not correctly substitute the collation specification.
For example, COMPARE( 'str1', 'str2', 'invalidcollationlabel' ) would generate
the message "Function 'compare' has invalid parameter '3' ('str2')".
It now generates the message "Function 'compare' has invalid parameter
'3' ('invalidcollationlabel')".
================(Build #3558 - Engineering Case #478055)================
The server allows users to create temporary stored procedures that exist
for the duration of the connection that creates the procedures. When creating
these temporary stored procedures the server disallows the specification
of an owner, i.e., temporary procedures are always owned by the user creating
them. If a temporary procedure was referenced by a qualified owner name
in a query, the server would have failed to find the temporary procedure.
The server now correctly finds qualified temporary procedures. A workaround
is to refer to the temporary procedure by name only.
Once a user had created a temporary procedure, the server would have allowed
the creation of another temporary or permanent procedure with the same name,
resulting in duplicate procedures. The creation of duplicate temporary or permanent
procedures is now not permitted. Note that the server already prevents the
creation of a duplicate temporary or permanent procedure when the already
existing procedure is a permanent one.
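A hypothetical sketch of the workaround, referring to the temporary procedure
by name only (the procedure name is illustrative):
create temporary procedure tp()
begin
  select 1;
end;
call tp();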
================(Build #3558 - Engineering Case #466790)================
If a query contained at least two predicates "T.col = (subselect1)"
and "R.col = (subselect2)" and both predicates could be used as
fence posts for index scans on the tables T and R respectively, then the
optimizer would have underestimated the cardinality of the joins, which may
have resulted, for complex queries, in suboptimal plans. This has been fixed.
================(Build #3557 - Engineering Case #479694)================
The problems described in Engineering cases 468864 and 478161 (inability
to reuse an alternate server name if a server using it terminated abruptly)
may still have occurred on Unix operating systems. On Linux systems these
problems could still have occurred when the file systems used were other
than ext2 and ext3. This has been fixed.
================(Build #3557 - Engineering Case #479320)================
On databases with multi-byte character sets, or when using multi-byte values
(such as NCHAR), the string trim functions, trim(), ltrim() and rtrim() may
have left blanks or extraneous bytes in the portion of the string that was
intended to be trimmed. For this to have occurred, multi-byte characters
must have appeared within the string data. This is now fixed.
================(Build #3556 - Engineering Case #479467)================
Getting the query plan for a statement, using the SQL functions PLAN, EXPLANATION
or GRAPHICAL_PLAN, could have caused the server to crash if the plan for
the statement had been cached. This has been fixed.
================(Build #3554 - Engineering Case #479204)================
When request logging of plans was enabled, and a procedure that was being
used in a query was dropped immediately before the cursor was closed, the
server could, in rare circumstances, have crashed. This has been fixed.
================(Build #3554 - Engineering Case #479203)================
The server could have crashed during recovery of a database that had gone
down dirty while it was heavily loaded. This has been fixed. The database
file itself was not damaged, and using a server with this fix will allow
recovery to proceed.
================(Build #3554 - Engineering Case #479098)================
When the HTTP server was under heavy load for a long period of time it was
possible that it could have transitioned into a state where many contiguous
requests were incorrectly rejected with a 503 error; 'Service Temporarily
Unavailable'. This has been fixed; HTTP requests are now only rejected in
this way when resources such as memory or licensing are unavailable.
================(Build #3554 - Engineering Case #479061)================
Executing a DROP DATABASE statement could have caused a server crash or failed
assertion if the database being dropped needed recovery. This has been corrected.
================(Build #3554 - Engineering Case #479053)================
For some CPU-bound queries, execution times could have been longer on Unix
servers than on an equivalent Windows server. This has been fixed.
================(Build #3554 - Engineering Case #478217)================
Executing some simple forms of INSERT, UPDATE and DELETE statements could
have caused the server to crash when used with particular host variable values.
This has been fixed.
================(Build #3553 - Engineering Case #478909)================
After having established a keep-alive connection from a browser request,
a shutdown may have caused the server to crash. This is a timing issue that
has only been seen when using Firefox. This has been fixed.
================(Build #3553 - Engineering Case #478752)================
Cancelling a database validation could have caused the server to crash.
The validation could have been issued from dbvalid, the Sybase Central plugin,
or from the VALIDATE DATABASE statement. A crash from this problem would
have been relatively rare, as it could only have happened during a relatively
small window of opportunity within a validation, but would have been more
likely in databases with many indexes per table. This has now been fixed.
================(Build #3553 - Engineering Case #478654)================
A blob column defined with the INLINE or PREFIX specification, with a size
that was near the database page size, could have caused the server to crash.
This problem would have only occurred if the INLINE or PREFIX specifier was
within approximately 50 bytes of the database page size, and string data
longer than that amount was inserted into the table. This has now been fixed.
================(Build #3552 - Engineering Case #444898)================
The server may have crashed when using cached plans in nested procedures.
This crash would have been rare, and was likely to appear only when there
was heavy competition for cache memory. This has been fixed.
================(Build #3551 - Engineering Case #477494)================
Stale materialized views were not used by the optimizer when the option
materialized_view_optimization was set to STALE. This has been corrected.
For further information on the option Materialized_view_optimization, please
refer to the SQL Anywhere® Server - Database Administration manual.
================(Build #3550 - Engineering Case #478161)================
On UNIX systems, if the primary and mirror servers were both run on the same
machine and the environment variable SATMP either pointed to the same location
for both, or was unset, the mirror server may have shut down with the error
"The alternate server name provided is not unique". This would
have occurred if the primary server was killed with the signal SIGKILL, or
if it was brought down by an abnormal event, and did not have a chance
to delete temporary files. This has been fixed.
================(Build #3550 - Engineering Case #477995)================
Attempting to run an archive backup (i.e. BACKUP DATABASE TO ... ) would have
caused a server crash if the database being backed up had no transaction
log file. This has been fixed.
================(Build #3550 - Engineering Case #476080)================
A primary server in a synchronous mirror configuration, running on a Unix
system, could have stopped servicing TCPIP connection packets if the number of
client connections exceeded the number of worker threads (i.e. the -gn value). Another
symptom would have been that existing connections would have been disconnected
with liveness timeout errors. This has been fixed.
================(Build #3549 - Engineering Case #477918)================
In a database mirroring system running in async mode, if a heavy volume of
update activity occurred on the primary server for an extended period of
time, the mirror server could have crashed as a result of running out of
memory. This has been fixed.
================(Build #3548 - Engineering Case #477777)================
When attempting to insert a value into a string column of a proxy table using
a variable, there was a chance the value inserted may have been NULL. This
problem only happened if the string column involved the evaluation of an
expression (like concatenation). For example, the following should work fine:
create variable v char(128);
set v = 'abc';
insert into proxy_t(char_col) values(v);
whereas, the following had a chance of inserting a NULL, instead of the
concatenated value:
create variable v char(128);
set v = 'abc' || 'def';
insert into proxy_t(char_col) values(v);
This problem has now been fixed.
================(Build #3548 - Engineering Case #477637)================
If the network connection to the primary server (S1) in a database mirroring
system was lost, a failover to the mirror server (S2) would have occurred
as expected. However, once the network connection was restored, S1 would
have reported that its database files were incompatible with the current
primary (S2). This has been fixed.
================(Build #3547 - Engineering Case #478064)================
An HTTP request having multiple empty variables may not always have represented
the variables as null; they were sometimes represented as the empty string ''.
This has been fixed.
================(Build #3547 - Engineering Case #478062)================
On a busy server, it was possible that a failed HTTP connection may have
caused the server to crash. This has been fixed.
================(Build #3547 - Engineering Case #477505)================
The Unload utility (dbunload) can be used to unload materialized view data
when given the command line options "-d -t <mat_view_name(s)>".
The changes for Engineering case 469705 introduced a problem where data for
materialized views was also being unloaded to disk during a "normal"
unload. This has been fixed.
================(Build #3547 - Engineering Case #477286)================
Garbage characters could have appeared in the request-level log on lines
containing query plans. This has been fixed.
================(Build #3544 - Engineering Case #476187)================
When opening a data reader, prepared statements were dropped and re-created.
This has been fixed so that a command can now only open one data reader
at a time.
================(Build #3543 - Engineering Case #477181)================
The SQL Anywhere server tries to maintain column statistics so that they
reflect modifications being made as a result of data change statements. The
server could have incorrectly modified existing column statistics under certain
circumstances when executing searched UPDATE statements. The problem was
likely to only take place when the new column values were the same as existing
values for a large proportion of candidate rows. This has now been corrected.
A potential remedy for this incorrect behaviour is to recreate column statistics
by using the CREATE STATISTICS statement. Also, the server may eventually
remedy the problem itself as a result of query execution feedback.
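The suggested remedy can be scripted per affected column; a minimal sketch, using hypothetical table and column names:

```sql
-- Recreate the histogram for a column whose statistics may have
-- been skewed by the faulty UPDATE maintenance described above.
-- Table and column names are illustrative.
CREATE STATISTICS my_table ( my_column );
```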
================(Build #3542 - Engineering Case #477195)================
The value of a numeric variable could have been rounded incorrectly when
used in a statement other than a SELECT statement. The precise conditions
under which this problem would have occurred are difficult to describe, but
involved the use of the variable as part of a larger expression.
For example, the following batch illustrates the problem. It would have
returned 2.0, rather than the correct 2.6:
BEGIN
DECLARE varNumeric NUMERIC;
DECLARE LOCAL TEMPORARY TABLE ltBUG (Dummy NUMERIC);
SET varNumeric = 1.3;
INSERT INTO ltBUG (Dummy) VALUES (ISNULL(varNumeric, 0) + ISNULL(varNumeric,
0));
SELECT * FROM ltBUG;
END;
This issue has been fixed.
================(Build #3542 - Engineering Case #472772)================
Mini-core dumps generated by the server on Linux systems may not have loaded
properly in the debugger, or would not have shown any stack traces. In order
to limit the size of the mini-core files on Linux, the size of a single data
segment in the dump was limited to 2MB. On some Linux systems this was not
sufficient so it has been increased to 5MB.
================(Build #3540 - Engineering Case #476211)================
Queries using a UNION or UNION ALL, that exist within a stored procedure,
function, trigger, or event, could have caused server instability (most likely
either a crash or a failed assertion). This has been fixed. A workaround
is to set the MAX_PLANS_CACHED option to zero.
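The workaround above can be applied with a SET OPTION statement; a sketch (the PUBLIC setting shown here requires DBA authority):

```sql
-- Disable statement plan caching server-wide so that UNION queries
-- inside procedures, triggers, and events avoid this issue.
SET OPTION PUBLIC.max_plans_cached = 0;
```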
================(Build #3539 - Engineering Case #476544)================
In rare cases, the server could have crashed executing an ALTER TABLE statement.
This would have most likely occurred when removing columns from a table.
This has been fixed.
================(Build #3539 - Engineering Case #476194)================
If a binary bitwise operator (i.e. AND, OR, or XOR) was performed on a (signed
or unsigned) bigint or a numeric, then the operator would have incorrectly
cast both arguments to 32-bit integers. This could have resulted in inappropriate
errors being thrown.
For example, the following script generates an overflow error:
create table types_bitwise( e_ubigint unsigned bigint );
select e_ubigint & 12345678901234567890 from types_bitwise;
This problem could also have occurred if the unary bitwise operator NOT
was performed on a numeric argument. This issue has been fixed.
================(Build #3535 - Engineering Case #478498)================
The optimizer relies on accurate statistics to generate efficient access
plans. Statistics may be obtained from column and index statistics among
other sources. Absence of good statistics can cause the optimizer to pick
access plans that execute slowly causing complex queries to suffer.
An improvement to the optimizer has been made so that the performance of
complex queries with expensive table scans, under certain circumstances, does
not suffer even when good statistics are not available.
================(Build #3535 - Engineering Case #476067)================
The CREATE INDEX statement was not respecting the setting of the Blocking
option, and would always have failed if another transaction had the table
locked (for shared or exclusive access). As of version 10.0, this is more
likely to be an issue as the cleaner locks tables temporarily as it does
its work. In particular, a reload could have failed with errors of the form
"User 'another user' has the table ... locked". This has now been
fixed.
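With this fix, CREATE INDEX honours the Blocking option; for example, to have the statement wait for the cleaner's locks instead of failing (index and table names are hypothetical):

```sql
-- Wait for conflicting locks rather than reporting an error.
SET TEMPORARY OPTION blocking = 'On';
CREATE INDEX idx_orders_cust ON orders ( customer_id );
```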
================(Build #3533 - Engineering Case #475618)================
The changes made for Engineering case 468319 introduced a problem where, if
many concurrent connections were made to the server and then disconnected,
the server may have taken several additional seconds to shut down after the
databases were shut down. This has been fixed.
================(Build #3533 - Engineering Case #475613)================
If a database had an alternate server name, and was running on a system that
was using LDAP, then the alternate server name would not have been unregistered
from LDAP if the database was shut down (but the server was not). This has
been fixed.
================(Build #3531 - Engineering Case #475278)================
If a table was defined with a char(255) or varchar(255) column, and then
BCP IN was used to insert a string of 251 or more bytes into the
char(255) column, then BCP would either have failed with a protocol error,
or, the value NULL would have been inserted into the column. This problem
has now been fixed.
================(Build #3531 - Engineering Case #475228)================
If a remote table had a wchar column and an attempt was made to create a
proxy table to the remote table, but with the wchar column being mapped to
a varchar column, then the server would have failed with a datatype not compatible
error. While this error was correct, it nevertheless was unexpected given
that mapping remote wchar columns to local varchar columns worked in prior
versions. Mapping remote wchar columns to local varchar columns is now allowed
for backwards compatibility, but not recommended. Wchar columns should instead
be mapped to nchar columns, if at all possible.
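The recommended mapping can be specified when the proxy table is created; a sketch with hypothetical server, table, and column names:

```sql
-- Map the remote wchar column to nchar rather than varchar.
CREATE EXISTING TABLE proxy_t
    ( remote_wchar_col NCHAR(20) )
    AT 'remsrv...remote_tab';
```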
================(Build #3531 - Engineering Case #475052)================
Under heavy load, a mirror server could have generated a corrupt transaction
log. The mirror server could have failed in several different ways, including
failed assertions 100902, 100903, and 100904. This has been fixed.
================(Build #3531 - Engineering Case #473699)================
Queries containing derived tables, or views which were eliminated during
semantic rewriting, may have incorrectly returned syntax errors such as "Column
'...' not found" (SQLE_COLUMN_NOT_FOUND, -143, 52003). For this problem
to have occurred, the derived tables or views must have had at least a subselect
or a subquery. This has now been fixed.
Example:
The derived table T4 contains a subselect "( select dummy_col from
sys.dummy T2 where 1 = T3.row_num ) as c1". T4 is redundant in this
query and it is eliminated by the outer join elimination process. The
query Q1 is equivalent to " SELECT DISTINCT T1.dummy_col FROM sys.dummy
T1 "
Q1:
SELECT DISTINCT T1.dummy_col
FROM
sys.dummy T1
left outer join
( select row_num c0,
( select dummy_col from sys.dummy T2 where 1 = T3.row_num ) as c1
from dbo.rowgenerator T3
) T4
on 1=1
================(Build #3529 - Engineering Case #475135)================
In rare circumstances, a TLS connection that should have failed due to a
certificate validation failure, may have actually succeeded. This has been
fixed.
================(Build #3529 - Engineering Case #475039)================
If an application connected using jConnect 6.x, and attempted to query the
name of a date or time column using ResultSetMetaData.getColumnTypeName(),
then the server would have incorrectly returned null instead of the string
"date" or "time" respectively. The server now has support
for the new TDS DATE and TIME datatypes, but the jConnect metadata scripts
had not been updated to reflect the new support. The script jcatalog.sql
has been updated.
================(Build #3529 - Engineering Case #475038)================
Preparing the exact same statement before and after executing a SETUSER WITH
OPTION statement may have caused the second statement to not respect the
option change and behave incorrectly. This has been fixed.
================(Build #3529 - Engineering Case #475010)================
The server may have crashed while debugging a web service procedure from
within Sybase Central, if the server was shut down while paused on a break-point.
This has been fixed. In addition, the canceled response will specify a Connection:
close and default to non-chunk mode Transfer-Encoding.
================(Build #3529 - Engineering Case #474724)================
When attempting to process transaction logs from an older version of ASA
(such as with the Translation utility or SQL Remote), there was a chance
that it would have failed prior to processing the full log, with any number
of odd error messages. This problem has now been fixed.
================(Build #3528 - Engineering Case #474883)================
Connecting to a blank padded database, using either Open Client or jConnect,
and then attempting to fetch an NChar value, would likely have caused the
application to hang. This problem has now been fixed.
Note that fetching nvarchar or long nvarchar data from a blank padded
database using a TDS client was not affected.
================(Build #3527 - Engineering Case #475032)================
Two new parameter values have been added to the system procedure sa_server_option()
to aid in locating references to database options in applications: OptionWatchList
and OptionWatchAction. OptionWatchList specifies a comma-separated list of
database options. OptionWatchAction specifies the action the server should
take when an attempt is made to set an option in the list. The possible values
for OptionWatchAction are 'message' (default) and 'error'. If OptionWatchAction
is set to 'message' and one of the options in the watch list is set, the
server will display a message in the server console window:
Option "<option-name>" is on the options watch list
If OptionWatchAction is set to 'error', an attempt to set an option in the
watch list will result in an error:
Cannot set database option "<option-name>" because it is
on the options watch list
The values of db_property('OptionWatchList') and db_property('OptionWatchAction')
can be used to determine the current settings.
Example:
call dbo.sa_server_option('OptionWatchList','automatic_timestamp,float_as_double,tsql_hex_constant')
Note that the size of the string specified for OptionWatchList is limited
to 128 bytes (the maximum for the "val" parameter to sa_server_option).
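The current settings can be retrieved with db_property(), as noted above; for example:

```sql
SELECT db_property( 'OptionWatchList' )   AS watch_list,
       db_property( 'OptionWatchAction' ) AS watch_action;
```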
================(Build #3527 - Engineering Case #475009)================
If a web server set the HTTP option CharsetConversion to 'OFF' by calling
the system procedure sa_set_http_option() (e.g. sa_set_http_option( 'CharsetConversion',
'OFF' )) from a stored procedure, the setting would have been ignored. This
has been fixed.
================(Build #3527 - Engineering Case #474726)================
If a query contained INTERSECT ALL or EXCEPT ALL and one of the branches
of the query expression was optimized by un-nesting an EXISTS predicate,
then the wrong answer could have been given. The INTERSECT ALL would have
been incorrectly treated as INTERSECT, and the EXCEPT ALL as EXCEPT.
For example, the following query shows this problem:
select D0.type_id
from sys.sysdomain D0
intersect all
select D1.type_id
from sys.sysdomain D1
where exists ( select * from sys.sysdomain D2 where D2.type_id = D1.type_id
)
This has been fixed.
================(Build #3527 - Engineering Case #474633)================
If the definition of a computed column included a reference to another table
column defined as NOT NULL, and an INSERT or UPDATE statement set the NOT
NULL column to NULL, then the computed column could have been evaluated to
an inconsistent result.
For example:
create table T( x int not null, y int null compute( if x is null then 123
else x endif ) )
go
CREATE TRIGGER "ChangeX" BEFORE INSERT, UPDATE ON T
REFERENCING NEW AS new_row
FOR EACH ROW
BEGIN
message string('Trigger: ',new_row.x,',',new_row.y);
if new_row.x is null then
SET new_row.x = 4;
end if;
END
go
insert into T(x) values( NULL )
After the above insert, T contains one row (4,NULL). The result for y is
inconsistent with the definition of the computed column.
The following non-fatal assertion failures could be reported in this case:
Assertion failure 106900 Expression value unexpectedly NULL in read
or
Assertion failure 106901 Expression value unexpectedly NULL in write
This has been fixed. An attempt to put a NULL value into a column declared
NOT NULL that is used in a computed column will result in an error message:
23502 -195 "Column '%1' in table '%2' cannot be NULL"
This checking is performed before triggers fire, and this represents an
intentional change in behaviour.
================(Build #3527 - Engineering Case #474597)================
The result set of SYS.SYSINDEXES was modified to include primary and foreign
key indexes in version 10 of SQL Anywhere. However, these indexes were not
distinguished from other indexes. The view definition has been changed so
that these indexes are identified by "Primary Key" and "Foreign
Key", respectively, by the indextype field.
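With the changed view definition, key indexes can be filtered on the indextype field; for example:

```sql
-- List only the indexes backing primary and foreign keys.
SELECT *
FROM SYS.SYSINDEXES
WHERE indextype IN ( 'Primary Key', 'Foreign Key' );
```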
================(Build #3527 - Engineering Case #469705)================
If an existing database that contained materialized views and indexes on
these views, was unloaded using the Unload utility or the DBTools function
DBUnload, then the definitions of these indexes was not unloaded. This problem
has now been resolved.
================(Build #3525 - Engineering Case #474324)================
If a stored procedure contained a cursor declaration for a call to an internal
stored procedure (e.g. sa_locks, sa_rowgenerator), then calling the procedure
would have resulted in the error "Invalid prepared statement type".
This has been fixed.
================(Build #3525 - Engineering Case #474314)================
Using or accessing a procedure or function that had been marked as hidden
could have caused the server to hang. This would only have happened if the
length of the obfuscated text was near, or larger than, a database pagesize.
This has been fixed.
================(Build #3525 - Engineering Case #474181)================
If the SQL Anywhere engine/server was shut down by means of a SIGINT, SIGHUP,
SIGQUIT or SIGTERM, then no reason for the shutdown would have been recorded
in the server log. This has been fixed -- an appropriate message will now
be output to the server log.
================(Build #3523 - Engineering Case #474753)================
Queries containing a predicate referencing a subquery may, in rare circumstances,
have failed with the message: "Assertion failed: 106105 (...) Unexpected
expression type dfe_Quantifier while compiling". This has been fixed.
================(Build #3523 - Engineering Case #474737)================
Queries containing a predicate using a view column which was a complex expression
(e.g., it contained a subselect) may have caused the server to fail an assertion.
Although this would likely have been rare, it has been fixed.
For example:
SELECT *
FROM
( SELECT DISTINCT DT1c2
FROM ( SELECT (select dummy_col from sys.dummy D3 ) as alias1,
( select row_num from dbo.rowgenerator R4 where row_num-1
= alias1 )
FROM dbo.rowgenerator R5
) DT1( DT1c1, DT1c2 )
) DT2
WHERE EXISTS(
SELECT 1
FROM (
SELECT 1
FROM sys.dummy D0
UNION ALL
SELECT 2
FROM sys.dummy D1
) DT3( "DT3c1")
WHERE DT2.DT1c2 >= DT3.DT3c1 <<<<<-------------
DT2.DT1c2 contains the subselect "( select row_num from dbo.rowgenerator
R4 where row_num-1 = alias1 )"
)
================(Build #3523 - Engineering Case #473990)================
If a statement contained a derived table or view that specified DISTINCT
and that derived table appeared on the NULL-supplying side of an outer join
and further the query optimizer selected the Ordered Distinct algorithm for
that derived table, then the statement could have failed with an assertion
failure:
Assertion failed: 106105 (...) Unexpected expression type dfe_PlaceHolder
while compiling
For example, the following statement could have this problem:
SELECT *
FROM
dbo.rowgenerator T1
LEFT JOIN
(
SELECT dummy_col c1
FROM sys.dummy T2
UNION ALL
SELECT 12345 c1
FROM ( select distinct id from sales_order_items order by id asc
) T3
) T4
ON 1=1
WHERE T1.row_num < 10
This has been fixed.
================(Build #3522 - Engineering Case #473834)================
If a statement contained a user-defined function that had an argument that
was a subquery, and the statement contained a union view with a predicate
comparing a column of the union view to the user-defined function, then the
statement could have failed with an inappropriate error. For example, the
following sequence would generate this inappropriate error.
create view V_shubshub( x ) as
select dummy_col from sys.dummy D3
union all
select dummy_col from sys.dummy D4
go
create temporary function F_tt(in a1 smallint)
returns decimal(16,5)
begin
return 0.0
end
go
SELECT *
FROM
(
select F_tt( ( select dummy_col from sys.dummy D1 ) ) x
from sys.dummy D2
) AS T1
WHERE NOT EXISTS(
SELECT 1
FROM V_shubshub T2
WHERE T1.x=T2.x
)
In version 8.0, the following error was returned:
-266 42W27 "QOG_BUILDEXPR: could not build sub-select"
in version 9.0 and later, the following error was given:
"Run time SQL error -- *** ERROR *** Assertion failed: 102604 (...)
Error building sub-select"
In versions prior to 8.0, it was possible for an incorrect answer to be
returned without an error being returned.
This has been fixed.
================(Build #3521 - Engineering Case #474599)================
If the database option Ansi_close_cursors_on_rollback was set to 'on', and
a cursor was opened on a stored procedure containing a declared cursor, a
ROLLBACK could have put the connection into a state where attempting to open
additional cursors would have failed with the message: "Resource governor
for 'cursors' exceeded". The ROLLBACK caused the cursors to be closed
in an order that resulted in the server trying to close a cursor twice. This
has now been fixed.
================(Build #3521 - Engineering Case #473461)================
Simple DELETE and UPDATE statements that bypass the optimizer could have
caused the server to crash when they were executed a second time after restarting
the database. This would have occurred if the statement used a row limit (TOP
n , FIRST,...), or the Transact SQL option rowcount was set to a non-zero
value. This has been fixed.
================(Build #3521 - Engineering Case #472773)================
If an event had a query that referenced a proxy table, and the remote data
access class was either SAJDBC or ASEJDBC, then the server would have crashed
once the event completed. This problem has now been fixed.
Note that there is no problem if the remote data access class is ODBC based.
================(Build #3520 - Engineering Case #473623)================
If two or more database mirroring servers running on Windows were started
at the same time, they could have failed to communicate properly. In this
situation, the last message displayed in the server console would have been
"determining mirror role ...". The servers would not have accepted
connections, and could not have been shut down. This has been fixed.
================(Build #3520 - Engineering Case #473562)================
It was possible, although rare, for a call to the system procedure sa_locks()
to have caused the server to crash. This crash was most likely to have occurred
when many users were connecting, and/or disconnecting, while sa_locks() was
being called. This issue has been fixed.
================(Build #3520 - Engineering Case #473550)================
A query with predicates in the WHERE clause of the form "column <
constant OR column > constant", would have returned rows with column
= constant. This only occurred if "column < constant" predicate
appeared before the OR and "column > constant" predicate was
after it. The predicate optimizer did not recognize that the two conditions
were disjoint and replaced them with "column is not null". This
has been corrected.
================(Build #3520 - Engineering Case #473431)================
For certain types of queries, it was possible for the statement to fail with
the non-fatal assertion failure 106105-"Unexpected expression type dfe_PlaceHolder
while compiling". If this happened, the statement was failed but the
server would continue processing further requests.
For example, the following statement could have exhibited this behaviour:
SELECT row_num
FROM rowgenerator R2
WHERE
( SELECT sum( 1 )
FROM sys.dummy D1
WHERE D1.dummy_col = R2.row_num
) >= 0
This has been fixed.
================(Build #3516 - Engineering Case #473187)================
The "Server name <name> already in use" message could have
contained incorrect characters instead of the name. In rare cases, this
could have caused the server to crash on startup if the server name was in
use. This has now been fixed.
================(Build #3516 - Engineering Case #466735)================
If an application connected to the server using jConnect 6.x, and attempted
to query a nullable long nvarchar column, then it was very likely the server
would have hung. The TDS datatype map was incorrectly resolving nullable
long nvarchar columns to itself, rather than resolving to nullable long binary
as per the TDS specification. This has now been fixed.
================(Build #3515 - Engineering Case #473062)================
The server would have crashed if an invalid reference was made to a function:
name..function() (note the two dots). This has been fixed.
================(Build #3515 - Engineering Case #472972)================
Calling the system procedure sa_describe_query() could have caused the server
to crash. This has now been fixed.
================(Build #3515 - Engineering Case #472922)================
In certain circumstances, a complex statement (large number of query elements)
could have caused the server to crash. This has been fixed.
================(Build #3515 - Engineering Case #472613)================
If a highly uncompressible string (i.e. one such that compression increases
the size of the data or reduces the size by less than about 14 bytes) was
inserted into a compressed column, fetching that value from the column would
have resulted in decompression errors. This was the result of an assumption
that the stored length of the column was less than or equal to the actual
length of the string. This has been fixed.
================(Build #3512 - Engineering Case #472829)================
If an application executed a Java stored procedure, and the server then crashed
(for an unrelated reason) while the Java method was still executing, there
was a chance the JVM would not have shut down cleanly. This problem has now
been fixed.
================(Build #3512 - Engineering Case #472768)================
The Unload utility was generating the reload.sql script using newline (linefeed)
characters to separate lines on Windows, rather than the generally accepted
carriage return/linefeed characters. This has been fixed. On other platforms,
the script will continue to use newline characters as line separators.
================(Build #3512 - Engineering Case #472623)================
If a query was executed that used a merge join, and the null-supplying input
of the merge join was a derived table that contained an unquantified expression
in its select list and the join condition contained a non-equality predicate
referencing this unquantified expression, the statement would have failed with
the following message:
Assertion failed: 106105 (...) - Unexpected expression type dfe_FieldOrigRid
while compiling
For example, the following query could have generated this problem:
SELECT T1.col1
FROM
( select NULL col1, row_num col2,
(select dummy_col from sys.dummy ) col3
from dbo.rowgenerator
) T1
FULL JOIN
( select dummy_col c1, dummy_col+1 c2
from sys.dummy D1
) T2
ON (T1.col2=T2.c1) AND (T1.col3=T2.c2)
This has been fixed.
================(Build #3512 - Engineering Case #472034)================
Validating a table with full check would have caused the server to crash
if the column order of a foreign key did not match the column order of the
referenced primary key. This has been fixed.
================(Build #3511 - Engineering Case #472645)================
If a transaction log file on the primary server in a database mirroring environment
was grown using ALTER DBSPACE TRANSLOG ADD, the statement would have been
sent to the mirror server, but would have been ignored. This has been fixed.
The log on the mirror will now be correctly grown when the mirror server
receives this statement from the primary server.
================(Build #3511 - Engineering Case #472620)================
Mini core files generated on Linux as the result of a crash were too big. The
server was including executable segments in the core files. This has been
fixed by discarding executable segments, and any data segments that exceed
2MB in size.
================(Build #3511 - Engineering Case #472508)================
If a query contained an outer join that had a derived table on the null-supplying
side, and the derived table contained a complex expression and the join condition
contained a non-equality predicate that referenced this complex expression
that was further used after the join, then the statement could have failed
with a non-fatal assertion failure error:
Assertion failed: 102501 (...) Work table: NULL value inserted into not-NULL
column
For example, the following statement could cause this problem:
with D(c1,c2) as (
select row_num-1, 0
from dbo.rowgenerator R1
where R1.row_num <= 101
union all
select 123456789, 0
from sys.dummy where 1=0
)
select D1.c1, D1.c2 * 23 x, D2.c1
from D D1 left join D D2 on D1.c1 = D2.c1 and D2.c1 = 3 and x - D2.c2 =
0
This has been fixed.
================(Build #3511 - Engineering Case #472507)================
If a statement contained a hash join with a complex ON condition containing
non-equality predicates that should not be executed in parallel, then it
was possible for the join to be incorrectly costed by the optimizer as if
it were executed in parallel. This could have led to sub-optimal execution
strategies, a server crash, or a wrong answer. This has now been fixed.
For example, the following statement could exhibit this problem:
select *
from sysdomain D1 left join sysdomain D2 on D1.type_id = D2.type_id and
0 = ( select D1.type_id - D2.type_id from sys.dummy )
================(Build #3511 - Engineering Case #472505)================
While performing a backup, the server could have crashed in a high availability
environment if one or more of the transaction logs was deleted. It was possible
to truncate a transaction log when the database file was mirrored, but this
is no longer the case. An error is now returned when attempting to truncate
a transaction log while it's involved in mirroring.
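For illustration, a statement of the following form (directory name hypothetical)
is the kind of request that now returns an error on a mirrored database:

```sql
-- On a database participating in mirroring, truncating the
-- transaction log is now rejected with an error rather than
-- deleting log data the mirror may still need.
BACKUP DATABASE DIRECTORY 'c:\\backup'
TRANSACTION LOG TRUNCATE;
```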
================(Build #3510 - Engineering Case #472510)================
A server started with several databases could have crashed on shutdown if
more than one alternate server name was used. This has been fixed.
================(Build #3510 - Engineering Case #472482)================
If a cursor was opened with a query that referenced proxy tables, and an attempt
was made to refetch a row, the server would have failed assertion 101701. This
problem has now been fixed, and the server now correctly gives an error indicating
that cursors involving remote tables are not scrollable.
================(Build #3510 - Engineering Case #472479)================
If a cursor contained a LONG VARCHAR or LONG NVARCHAR column, and it was
retrieved using a GET DATA request, then the request could have failed with the
error: -638 "Right truncation of string data". In some cases, this
situation could have caused the server to crash. This has now been fixed.
================(Build #3510 - Engineering Case #472400)================
In some cases, if a query contained an unflattened derived table on the null-supplying
side of an outer join and the derived table contained a constant, then the
statement could have failed with an assertion failure message:
Run time SQL error -- *** ERROR *** Assertion failed: 106105 (...)
Unexpected expression type dfe_PlaceHolder while compiling (SQLCODE: -300;
SQLSTATE: 40000)
For example, the following query demonstrates this problem:
SELECT *
FROM SYS.dummy D1
LEFT JOIN
( select c1
from ( select distinct D2.dummy_col, 31337 c1
from dbo.rowgenerator R1 , sys.dummy D2
) DT1
) DT2
ON (D1.dummy_col=DT2.c1-31337)
This has now been fixed.
================(Build #3510 - Engineering Case #472071)================
If a query contained a recursive union that appeared on the right side of
a nested loops join and the recursive union selected a hash execution strategy,
then the statement could have failed with the following error:
Run time SQL error -- *** ERROR *** Assertion failed: 106105 (...)
Unexpected expression type dfe_PlaceHolder while compiling [-300] ['40000']
For example, the following statement could cause this error.
with recursive Ancestor( child) as
( (select row_num from rowgenerator R1 where R1.row_num = 1 )
union all
(select R2.row_num+100
from Ancestor, rowgenerator R2
where Ancestor.child = R2.row_num
)
)
SELECT *
FROM sys.dummy D1, Ancestor
for read only
In the case of this error, the server would continue to process other statements
(a non-fatal assertion failure). This has been fixed.
================(Build #3509 - Engineering Case #472484)================
If an application made a request that executed a Java call, and then attempted
to shut down the database while the Java call was still active, the
server would either have hung or crashed. This problem has now been fixed.
================(Build #3509 - Engineering Case #472390)================
If a statement other than a SELECT statement converted a value of type NUMERIC
or DECIMAL to a string, then the conversion could have been incorrect. There
could have been insufficient trailing 0 characters for the declared scale
of the value. This has been fixed.
For example, the following batch would have returned 12.34 and 12.34000,
now the two values should both be 12.34000:
begin
declare @num numeric(10,5);
declare @str char(20);
set @num = 12.34;
set @str = @num;
select @str, cast( @num as char(20) );
end
Further, the result data type of division in the NUMERIC/DECIMAL domain could
have been incorrect for a non-SELECT statement. This has been fixed as well.
================(Build #3509 - Engineering Case #472386)================
If a database containing global autoincrement columns was rebuilt, and the
setting of the global_database_id option was 0, the next available value
for these columns stored in SYS.SYSTABCOL.max_identity was not set. This
has been fixed. As a workaround, execute the following after rebuilding the
database:
set option public.global_database_id=1;
set option public.global_database_id=0;
================(Build #3509 - Engineering Case #471644)================
In some situations, if a statement contained a subselect in the SELECT list
of a query block that was aliased, and the alias was referred to in the WHERE
clause of the query, then the statement would have failed with the following
error:
*** ERROR *** Assertion failed: 106104 (...) Field unexpected during compilation
(SQLCODE: -300; SQLSTATE: 40000)
This has been fixed.
================(Build #3509 - Engineering Case #450135)================
Reorganizing an index could have caused the server to fail an assertion,
or to have crashed. This was unlikely to occur unless the server was heavily
loaded. This has now been fixed.
================(Build #3508 - Engineering Case #472078)================
The server could have crashed if a MESSAGE statement contained certain types
of substr or byte_substr expressions. These expressions must have used a
negative start value that specified a starting point before the beginning
of the string for a crash to have occurred. For example: "message byte_substr(
'a', -5, 1 )". This has been fixed.
================(Build #3508 - Engineering Case #472073)================
DELETE statements that did not bypass optimization, and that loaded CHECK
constraints, would have failed if there was a syntax error in a CHECK constraint
definition. This has been fixed.
================(Build #3508 - Engineering Case #471981)================
Attempting to create proxy tables to an Oracle server in a SQL Anywhere database
with a multi-byte character set would likely have failed with a "table not
found" error if the table owner and name were not specified in uppercase
in the CREATE EXISTING TABLE location string. This problem has now been fixed.
================(Build #3508 - Engineering Case #471765)================
If a table had a column of type uniqueidentifier, varbit, long varbit, xml,
nchar, nvarchar or long nvarchar, the column would not have been in the result
set of the system procedure sp_columns. This has been fixed. To get this
fix into an existing database, the database needs to be rebuilt.
================(Build #3506 - Engineering Case #471696)================
The 32-bit and 64-bit versions of the SQL Anywhere library that provides
support for the Windows Performance Monitor could not coexist. If the 64-bit
library was registered last, only the 64-bit version of the Performance Monitor
would have been able to monitor SQL Anywhere servers (both 32-bit and 64-bit
servers). If the 32-bit version was registered last, only the 32-bit version
of the Performance Monitor would have been able to monitor SQL Anywhere servers
(both 32-bit and 64-bit servers). The area of the registry that contains
performance monitoring information and, in particular the path to the support
library, is shared by both 32-bit and 64-bit applications, unlike other areas
of the registry which have been split into 32-bit and 64-bit versions. The
problem has been fixed by having the 32-bit and 64-bit support libraries
register using different service names.
================(Build #3505 - Engineering Case #471431)================
If an "UPDATE OF column-list" trigger was defined, the trigger
could have been fired in some cases where no column in the associated list
had in fact been modified.
BEFORE UPDATE triggers should fire if a column in the column-list appears
in the SET clause of the associated UPDATE statement, while AFTER UPDATE
triggers should fire if the value of a column in the column-list has been
modified by the UPDATE statement.
A positioned update would have caused all BEFORE UPDATE and AFTER UPDATE
triggers to fire regardless of whether a column in their column-list appeared
in the SET clause or was modified by the update. Further, if a searched update
included all columns in the SET clause, but did not modify the value of any
column, AFTER UPDATE triggers would have been fired inappropriately.
This has been fixed.
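The intended semantics can be sketched as follows (table and trigger names
are hypothetical):

```sql
CREATE TABLE t1 ( pk INT PRIMARY KEY, a INT, b INT );

-- Should fire whenever column a appears in the SET clause.
CREATE TRIGGER trg_before BEFORE UPDATE OF a ON t1
REFERENCING OLD AS o NEW AS n
FOR EACH ROW
BEGIN
    MESSAGE 'BEFORE: a is in the SET clause';
END;

-- Should fire only when the value of column a is actually modified.
CREATE TRIGGER trg_after AFTER UPDATE OF a ON t1
REFERENCING OLD AS o NEW AS n
FOR EACH ROW
BEGIN
    MESSAGE 'AFTER: the value of a was modified';
END;

-- Fires trg_before (a is in the SET clause) but not trg_after
-- (the value of a does not change):
UPDATE t1 SET a = a;
```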
================(Build #3505 - Engineering Case #471415)================
If the public option MAX_TEMP_SPACE was inadvertently dropped (i.e. set public.MAX_TEMP_SPACE=),
the server would have crashed the next time it checked the free temp space.
This has been fixed.
================(Build #3505 - Engineering Case #471413)================
If a BACKUP statement was executed that used one of the TRANSACTION LOG backup
options, and the database did not use a transaction log file, then the server
would have crashed. This has been fixed. The server now returns the error
"Syntax error near 'backup option'".
================(Build #3505 - Engineering Case #471326)================
The execution time reported in the request-level log for a statement could
have been much too high. This would only have happened in the second or subsequent
request-level logs that were split using the -on command line option.
This has been fixed.
================(Build #3505 - Engineering Case #471293)================
An HTTP request made to a DISH service may have caused a server crash if
the DISH service exposed a SOAP service with the following characteristics:
- the SOAP service was created with DATATYPE ON or IN, specifying that parameter
types are to be mapped from SQL to XMLSchema data types (rather than exclusively
to xsi:string),
- one or more parameters within the stored procedure called by the SOAP
service were (SQL) XML data types.
This has been fixed. VARCHAR, LONG VARCHAR, and XML are mapped to XMLSchema
type STRING (xsi:string). Most client toolkits including the SQL Anywhere
SOAP client will automatically HTML_ENCODE data types that map to xsi:string.
================(Build #3505 - Engineering Case #467122)================
The execution time of the CREATE TABLE statement did not scale very well
as the number of columns being created increased. The statement could have
taken a significant amount of time to create tables with thousands of columns.
The performance of the server has been improved so that the CREATE TABLE
statement behaves more gracefully as bigger tables are created.
Note, this also addresses the problem where deleting a column from a table
via the ALTER TABLE statement would have caused a syntax error for values
that already existed in other columns.
================(Build #3504 - Engineering Case #471148)================
When the server was run on Linux x86 or x86-64 systems in an X Window environment
where the GTK libraries were available, it may have crashed on shutdown if
the -ui or -ux command line options were specified when the GUI console was
invoked, or when running the Developer Edition regardless of command line
options. The integrity of the data in the database was not compromised. This
has been fixed.
================(Build #3504 - Engineering Case #471110)================
If an event handler attempted to create a temporary table via SELECT ...
INTO #temptab, or it attempted to create a view and the SELECT list included
a string column with CHAR length semantics, the server would have crashed.
This has been fixed.
================(Build #3503 - Engineering Case #471006)================
EXECUTE permission was not granted to the group SA_DEBUG for the system procedures
sa_proc_debug_version and java_debug_version. If a user that was granted
membership in SA_DEBUG was not also a DBA, attempting to use the procedure
debugger would have failed and possibly caused Sybase Central to crash. This
has been corrected for newly initialized databases. For existing databases,
the permissions can be granted by executing the following:
grant execute on dbo.sa_proc_debug_version to sa_debug
go
grant execute on dbo.java_debug_version to sa_debug
go
================(Build #3503 - Engineering Case #470313)================
A query may have failed with the message 'Assertion failed: 106104 "Field
unexpected during compilation"' when a simple select that should bypass
the optimizer was executed. The problem would only have occurred if the select
used a unique index and an additional search condition contained a column
reference on the right hand side.
For example:
create table T1 ( pk int primary key, a int, b int );
select * from T1 where pk=1 and a=b;
This has been fixed.
================(Build #3502 - Engineering Case #470559)================
The server could have crashed when starting up a database, if the start event
of the database contained an HTTP/SOAP call. This has been fixed.
================(Build #3502 - Engineering Case #470546)================
A query with a procedure call in the FROM clause, that had a column or variable
parameter that matched the name of one of the procedure's own result columns,
would have failed with the error -890 "statement size or complexity
exceeds server limits". This is fixed.
================(Build #3501 - Engineering Case #470414)================
If a Remote Data Access server was created where the underlying ODBC driver
was not a Unicode driver, then using Remote Procedure Calls (RPC) with that
remote server would not have worked correctly. This problem has now been
fixed.
================(Build #3500 - Engineering Case #470292)================
Under heavy load the server could have crashed without any assertions in
the system log or the console log. This has been fixed.
================(Build #3500 - Engineering Case #470195)================
If an event was fired using the TRIGGER EVENT statement with a list of event
parameter values, the server could have crashed when the event handler attempted
to access the parameters via the event_parameter() function. This has been
fixed.
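A sketch of the kind of usage involved (event and parameter names are
hypothetical):

```sql
CREATE EVENT my_event
HANDLER
BEGIN
    -- Accessing a parameter passed by TRIGGER EVENT could
    -- previously have crashed the server.
    MESSAGE 'param = ' || event_parameter( 'my_param' );
END;

-- Fire the event with an explicit parameter value:
TRIGGER EVENT my_event ( my_param = 'some value' );
```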
================(Build #3500 - Engineering Case #470173)================
After using Sybase Central to create a proxy table to a table in Paradox,
fetching from that table would have failed with a "could not find the
object" error message. The method Sybase Central used to qualify the
table name with the database name was not supported by Paradox. This problem
has been fixed by implementing a work around in the server to modify the
qualified table name so that it is supported by Paradox.
================(Build #3500 - Engineering Case #468358)================
If the server returns the error "Dynamic memory exhausted", it
is followed by diagnostic information about the cache usage that is
printed to a file. During this diagnostic printing, the server may have crashed.
This has been fixed.
================(Build #3499 - Engineering Case #469973)================
On Windows Vista systems, the server would have left a minimized command-shell
window visible in the Windows taskbar when starting Java in the database.
Self-registering DLLs, such as the ODBC driver or the dbctrs perfmon plugin
DLL, could also have shown a minimized command-shell window, but it was very
short-lived.
Both these problems have now been corrected.
================(Build #3499 - Engineering Case #469523)================
If a view was updated with an UPDATE statement that contained a VERIFY clause,
the server may have crashed or incorrectly used the VERIFY clause. This would
only have happened if the view and the base table had a different column
ordering in the select list. This has been fixed.
================(Build #3498 - Engineering Case #468726)================
The server could have returned incorrect results, or in some cases crashed,
when executing statements with predicates of the form "expression compare
(subquery)". This has been fixed.
Note that BETWEEN predicates involving subqueries also qualify as they are
interpreted as two compare predicates (see example below).
Example:
select s_suppkey,
s_name,
s_address,
s_phone,
total_revenue
from supplier, revenue1
where s_suppkey = supplier_no
and ( select max(total_revenue)
from revenue1
) between (total_revenue-0.01) and (total_revenue+0.01)
order by s_suppkey
================(Build #3496 - Engineering Case #469685)================
The server could have incorrectly accepted an outer reference as a parameter
to the ROWID() function, without throwing an error. For example, the following
query would not have returned an error:
SELECT * FROM table1, some_procedure( ROWID( table1 ) )
The behaviour of such a function call was undefined; a query containing
an illegal outer reference may have returned unintuitive or inconsistent
results. This has been corrected so that the server now correctly returns
an error when the ROWID() function is called with an illegal outer reference
as an argument.
================(Build #3496 - Engineering Case #469436)================
The server would have incorrectly marked all updates applied by SQL Remote
as having caused a conflict when being applied to the consolidated database.
This would have caused the resolve update trigger to fire when there was
in fact no conflict, and would also have caused the update that was just
applied to the consolidated to be echoed back to the remote database. This
would have caused the row at the remote database to be temporarily set back
to an older value. This problem has now been fixed so that the server will
properly detect conflicts at the consolidated database.
================(Build #3496 - Engineering Case #469259)================
The server may have crashed during the execution of an IN predicate of the
form "{column} IN ( const, ...)". This would have occurred if the
IN predicate was part of a control statement's condition or part of the
WHEN clause of a trigger definition, or if at least one IN list entry needed
an implicit type conversion in order to perform the compare. This has been
fixed.
================(Build #3495 - Engineering Case #467259)================
The server could have crashed when a large number of connections were concurrently
executing INSERT or UPDATE statements on tables with CHECK constraints. This
was more likely to occur on multi-processor machines. This has been fixed.
================(Build #3494 - Engineering Case #468148)================
In some cases, executing queries that contained more than one procedure in
the FROM clause could have caused the server to crash. This has been fixed.
================(Build #3494 - Engineering Case #463318)================
On Windows CE, when starting a second server on the device, the window of
the first server is normally displayed. This should not have been the case
though when the first server was started with the command line options -qw
and -qi to hide the window and the icon respectively. This has been corrected
so that the first server's command line is honored and the first server's
window is no longer displayed when attempting to start a second server on
the device. Starting the second server will now fail silently in this instance.
================(Build #3493 - Engineering Case #468864)================
Servers running on Unix systems, and attempting to use an alternate server
name, may have failed to start, giving the error "Alternate server name
is not unique", even though no other server on the machine or network
was using that server name. This has been fixed.
================(Build #3491 - Engineering Case #468462)================
If BCP IN was used to populate a table owned by a user other than the connected
user, it would have failed with either a 'table not found' error, or the
server could have crashed. In some cases, if the connected user also owned
a table with the same name as the table being populated, then the server
would have attempted to add rows to the wrong table. This problem has now
been fixed.
================(Build #3491 - Engineering Case #468343)================
If an application attempted to connect using the TCP parameter DoBroadcast=None
and specified an alternate server name, rather than the real server name,
in the ENG parameter, the connection would have failed. This has been fixed.
Note that the fix requires both an updated server and updated client library.
================(Build #3490 - Engineering Case #468319)================
It was possible, although very unlikely and timing dependent, that the server
could have hung, or even less likely crashed, when a connection was disconnected.
This has been fixed.
================(Build #3490 - Engineering Case #465040)================
When the server was converting the WHERE, ON or HAVING clauses to Conjunctive
Normal Form, and discovered at least one simplified or implied condition,
then it was possible that the resulting query was not equivalent to the original
query and therefore did not return the same result set.
For example, the following should return 1 row, but did not return any rows:
create table T1 ( d int, e int, f char(5) ,g char(5) );
insert into T1 values ( 5, 2, '4E', 'N' );
select * from T1
where ( d is not null or d is NULL )
and ( e = 2 or e is NULL )
and ( g <> 'Y' or g is null )
and e = 2
This has been fixed.
================(Build #3489 - Engineering Case #467800)================
Attempting to create a proxy table to a table that has a computed index on
a remote Oracle server, would have failed with the error "column not
found". As part of creating a proxy table to an existing remote table,
the server will also attempt to create indexes on the proxy table to match
the indexes on the remote table. In this case, the name returned for the
computed column was not an actual column name, but an expression name. The
server now verifies that the column name truly is the name of a column in
the proxy table when automatically creating indexes on proxy tables.
================(Build #3489 - Engineering Case #467747)================
A second situation similar to Engineering Case #467468 was found where the
server could have crashed, or become deadlocked, when using snapshot isolation.
As with the previous issue, this could only have happened if the snapshot
transaction made use of a long index value. This has now been fixed as well.
================(Build #3488 - Engineering Case #467652)================
If a procedure had a query that referenced a table that was then modified
by an ALTER TABLE statement, later execution of the procedure could have
caused the server to crash. This has been fixed.
See also Engineering Case #443016.
================(Build #3488 - Engineering Case #467590)================
If concatenation was performed in procedural code (either using the concatenation
operator or string() function), and one of the arguments had an error, then
the server could have crashed. This has been fixed.
================(Build #3488 - Engineering Case #467587)================
The server may have crashed if it was not able to get the network interfaces
from the operating system. This has been fixed.
================(Build #3488 - Engineering Case #467586)================
After a failed integrated login, calls to the system function EVENT_PARAMETER(
'User' ) in the ConnectFailed event, could have returned garbage characters.
This has been fixed so that calls to EVENT_PARAMETER( 'User' ) will now return
the empty string if the user is not known.
================(Build #3488 - Engineering Case #467580)================
If a function was declared non-deterministic and it was used in a SELECT
INTO (variable) statement, then the function could have been executed extra
times. For this to have occurred, the function reference must not have contained
any column references. This problem could also have occurred for SELECT statements,
if a column were fetched using GET DATA requests. This problem has been fixed.
================(Build #3488 - Engineering Case #467495)================
It was possible for an UPDATE or DELETE statement executed at isolation level
0, to have been blocked on a row lock for a row that was not actually affected
by the statement. For this to have occurred, the row of the table must have
matched all of the local simple predicates for the table, but been rejected
by a later subquery or join predicate, or a predicate involving a user defined
function. This behaviour has been changed. Now, UPDATE and DELETE statements
at isolation level 0 will only take INTENT or EXCLUSIVE locks on the rows
that are actually modified by the statement. Further, this change adjusts
the locking behaviour at isolation level 1 so that it is less likely for
an INTENT or EXCLUSIVE lock to be taken on a row not affected by an UPDATE
or DELETE statement, but this is not guaranteed at isolation level 1. With
isolation levels 0 and 1, update anomalies may occur and not all anomalies
are prevented by the locking mechanism. Application developers should use
caution when using isolation level 0 or 1 with UPDATE and DELETE statements
to ensure that the semantics are acceptable to them.
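As an illustration of the earlier behaviour (tables and predicates are
hypothetical):

```sql
SET TEMPORARY OPTION isolation_level = 0;

-- This UPDATE could previously have blocked on a row lock for a row
-- of t that matched t.x = 10 but was rejected by the subquery
-- predicate, even though that row was never modified. After the fix,
-- only rows actually modified take INTENT or EXCLUSIVE locks.
UPDATE t
   SET flag = 1
 WHERE t.x = 10
   AND t.pk IN ( SELECT pk FROM other_table WHERE status = 'open' );
```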
================(Build #3488 - Engineering Case #467269)================
For some input values, the UNISTR() function could have entered an endless
loop. This has been fixed.
================(Build #3488 - Engineering Case #466968)================
Repeated concatenation of a string to another concatenation expression, could
have caused the server to use excessive amounts of cache memory, eventually
resulting in a stack overflow or a crash. For this to have occurred, a concatenation
expression must have repeatedly been performed as the right hand argument
of a concatenation expression.
For example:
declare @var long varchar;
declare @counter integer;
set @counter = 1;
set @var = space(30);
calcloop:
WHILE @counter < 100000 LOOP
set @var = 'string' || @var;
set @counter = @counter + 1;
END LOOP calcloop;
The error did not occur if only the left hand argument of a concatenation
expression was a concatenation expression, i.e., set @var = @var || 'string';
This has been fixed.
================(Build #3487 - Engineering Case #467970)================
In the presence of a thread deadlock error, an HTTP or SOAP stored procedure
or function could have caused the server to appear to hang indefinitely.
This problem would only have occurred if the HTTP or SOAP procedure or function
being initiated happened to be the last unblocked database request task (i.e.,
on a server with a -gn value of x, x-1 request tasks would need to be blocked
already). This has been fixed.
================(Build #3487 - Engineering Case #467468)================
The server could have crashed, or become deadlocked, when using snapshot
isolation. This could only have happened if the snapshot transaction made
use of a long index value. This has now been fixed.
================(Build #3487 - Engineering Case #467446)================
If auditing of database activity was enabled, certain failed connection attempts
could have caused the server to crash. This has been fixed.
================(Build #3486 - Engineering Case #467437)================
Referencing a column as "IDENTIFIER" .. "IDENTIFIER"
(note: two dots) could have caused the server to hang. In cases where the
server did not hang, the first identifier would have been ignored. A similar
problem existed for columns referenced as "IDENTIFIER" . "IDENTIFIER"
.. "IDENTIFIER".
For example, the following script would have caused a server hang:
CREATE TABLE T1 (
x char(4) NOT NULL,
y char(13) NOT NULL,
z char(5) NOT NULL,
);
CREATE TABLE T2 (
w char(5) NOT NULL,
);
SELECT a.x as x, a.y as y,
count(b.w)
FROM T1 a,
T2 b
WHERE a.z = b.w
Group by a.y, a..x ; -- Note extra '.'
Now, the server will generate an error if a query contains a column expression
of the form
"IDENTIFIER" .. "IDENTIFIER"
or
"IDENTIFIER" . "IDENTIFIER" .. "IDENTIFIER".
================(Build #3486 - Engineering Case #466044)================
If the server was started with the -qw ("do not display database server
screen") or -qi ("do not display database server tray icon or screen")
command line options, certain messages, such as checkpoint messages, would
not have been written to the output log or to the internal message list (which
can be queried with the property functions). Services on Windows Vista, as
well as services on XP which did not allow interaction with the desktop,
may have behaved as if -qi was specified. This has been fixed so that all
messages now always go to the server output log and internal message list.
On Windows, the -qw switch now completely suppresses the creation of the
server window. As before, a systray icon is created but that icon is now
created immediately upon startup. Previously, a minimized server window was
created momentarily before the systray icon was created. In 9.0.2, the menu
for the systray icon allowed the user to open the server window via the "Restore"
menu item, but "Restore" has now been disabled (which is consistent
with 10.x and the fact that -qw is supposed to prevent the creation of
a server window).
On UNIX, the -qw switch suppresses all messages from going to the console
after the 'press q to quit' message has been displayed.
================(Build #3485 - Engineering Case #467128)================
The following system stored procedures no longer require DBA authority by
default:
sa_dependent_views
sa_get_dtt
sa_check_commit
sa_materialized_view_info
================(Build #3485 - Engineering Case #466873)================
If an application made a remote procedure call to a stored procedure in a
Microsoft SQL Server database, and one of the arguments to the stored procedure
was a string argument of type Input with the value of '' (empty string),
then the RPC would have failed with an 'invalid precision value' error. This
problem has now been fixed.
================(Build #3485 - Engineering Case #466566)================
If a cursor was opened on a statement containing a comparison predicate of
the form "T.x = val", where val was NULL, and, further, the statement
was opened with the Rowcounts option ON, or scrolling forward and backward
was performed, then the server could have given the wrong answer (rows that
did not match the predicate), or in some cases a crash was possible. In order
for the problem to appear, the value had to have specific characteristics
(for example, variable within a stored procedure, or an unquantified function).
This problem has been fixed.
================(Build #3485 - Engineering Case #395207)================
The ALTER TABLE statement can be used to change the nullability of a column.
If an attempt was made to change a column from "nulls not allowed"
to "null allowed" in conjunction with specifying the datatype of
the column, as in the following example:
ALTER TABLE t ALTER c1 int NULL
The server would have ignored the NULL, and left the column as "null
not allowed". This has been corrected so that the server will now change
the column specification to "null allowed" in this case. A work
around is to not combine NULL with the datatype for the column as in:
ALTER TABLE t ALTER c1 NULL
================(Build #3484 - Engineering Case #466829)================
If the server was in the process of starting up or shutting down when the
machine was put into the hibernate state, the server may then have crashed
when the machine came out of hibernation. This has been fixed.
================(Build #3484 - Engineering Case #466491)================
The server could have failed with various assertion failure errors when rebuilding
a database with an invalid foreign key definition. For example, if the foreign
key trigger action had the column being set to NULL, based on some event
and the column definition did not allow NULLs, then this could have occurred.
A database could have gotten into this state if at the time the foreign key
was created the definition was valid but the column definition changed at
a later date. The proper error is now returned indicating the reason the
key is now invalid.
================(Build #3483 - Engineering Case #466203)================
If a trigger encountered an error due to a statement referencing a table
that was exclusively locked by another connection, a subsequent execution
of the trigger could have reported "Invalid statement". This has been
fixed.
================(Build #3483 - Engineering Case #465178)================
As of version 10.0.0, Remote Data Access no longer worked with an ODBC driver
that did not support UNICODE. This has now been resolved, and Remote Data
Access is now possible with non-UNICODE ODBC drivers. It should be noted
though that data coming from non-UNICODE ODBC drivers will not undergo any
character set translation.
================(Build #3482 - Engineering Case #466293)================
A server with active HTTP connections could have hung indefinitely during
shutdown. At the point of the hang, the server had completed stopping all
databases. This has been fixed so that the server shutdown will not hang,
although in rare cases it could still take up to about 20 seconds after the
databases have been stopped to complete the shutdown process.
================(Build #3482 - Engineering Case #466188)================
When unloading a database created using a version prior to 10 with the
version 10.x Unload utility, the unload may have failed with the error "SQL
error: Not enough memory to start". This would only have occurred if
one of the dbunload command line options -ac, -ar or -an was used, either
the unload engine or the reload engine (or both) had to be autostarted and
was the 32 bit server, and the machine had sufficient RAM such that when
dbunload requested a cache size of 40% of available memory, this was greater
than the maximum allowable non-AWE cache size on the given platform. This
would have meant a memory size of about 8 GB on Unix and 32-bit Windows platforms,
and 10 GB on 64-bit Windows. This has been fixed.
A workaround is to specify the undocumented dbunload options -so and -sn,
to set the options for the unload and reload servers respectively, as follows:
dbunload -so " -ch 40p" -sn " -c 40p" ...
Note the space before each -c switch; these spaces are required. Mnemonic: -so
sets the additional switches for the "Old" server, -sn sets the
additional switches for the "New" server.
================(Build #3482 - Engineering Case #466091)================
The Unload utility (dbunload) may have crashed when attempting to rebuild
a database prior to version 10, if a version 10 server that disallowed connections
to the utility_db was already running. This has been fixed.
Note, in general it is good practice to ensure that no version 10 server
is running when rebuilding databases created prior to version 10.
================(Build #3482 - Engineering Case #465003)================
If multiple connections using Remote Data Access attempted to read from the
same set of tables in a Microsoft SQL Server database, then the connections
would have been serialized. This was due to the fact that Remote Data Access
always used pessimistic locking when connecting to Microsoft SQL Server.
This has now been fixed so that connections to Microsoft SQL Server are no
longer serialized.
================(Build #3481 - Engineering Case #466070)================
Connection attempts to the utility_db would have failed, throwing an exception
indicating that setting options is not allowed in the utility_db. As the SET OPTION
statement is not valid in the utility_db, the AsaConnection class now checks
whether the database is the utility_db before issuing a SET TEMPORARY OPTION statement.
================(Build #3478 - Engineering Case #466559)================
If a LOAD TABLE statement attempted to load a NUMERIC value that was too
large for the target column, an invalid value would have been loaded. Now
a conversion error is reported.
================(Build #3478 - Engineering Case #465814)================
Performing a LOAD TABLE into a temporary table that already contained rows
could have crashed the server, if the table had no primary key. This has been fixed.
================(Build #3478 - Engineering Case #465695)================
After connecting using embedded SQL, some user options may not have been
respected until the first SET OPTION was performed. Public options may have
been used instead of any user options (i.e. options set with "SET OPTION
user.option = value" may not have been respected.)
After executing a SETUSER WITH PERMISSIONS userid statement, some of the
current options for userid may not be respected. Similarly, the next SETUSER
command which should have set the options back, may have caused the original
options to not be respected.
Both of these problems have now been fixed.
================(Build #3478 - Engineering Case #465530)================
If the filename of a transaction log, including its full path, was exactly
70 bytes, and the Backup utility was used to do a backup, the server would
have failed to truncate the log when the -x option (delete and restart the
transaction log) was specified. This has been fixed.
A workaround would be to use the BACKUP DATABASE statement with the TRANSACTION
LOG TRUNCATE clause to truncate the log.
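The BACKUP DATABASE workaround can be sketched as follows; the backup directory path is a placeholder:

```sql
-- Back up the database and truncate the transaction log in one statement.
-- 'c:\\backup' is a placeholder; substitute the desired backup directory.
BACKUP DATABASE DIRECTORY 'c:\\backup'
TRANSACTION LOG TRUNCATE;
```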
================(Build #3478 - Engineering Case #464446)================
Connecting to a version 10 server with a version 9 or older client, could
have caused the server to ignore the Language connection parameter, or in
rare cases, caused the server to crash. The language used by the connection
would then have been the server's default language. This has been fixed.
================(Build #3477 - Engineering Case #462945)================
When run on NetWare systems, unloading the server using the NetWare console
(i.e. "UNLOAD DBSRV9") when there were still active connections
to the server, may have caused the server to abend or hang. This has been
fixed.
================(Build #3477 - Engineering Case #462516)================
The server may have failed to detect that a corrupt database page was read
from disk, and therefore did not stop with an assertion failure. This has been corrected.
================(Build #3477 - Engineering Case #445626)================
Attempting to start a database with the wrong log file could have failed
with the error "Unable to start specified database: ??? belongs to a
different database". This has been fixed so that the log file name now
correctly appears in the error message.
================(Build #3476 - Engineering Case #392468)================
When an arithmetic operation generated an overflow error, the error message
did not show the correct value if that value did not fit into the destination.
For example:
select 100000 * 100000
Previously, this returned an error message "Value 1410065408 out of
range for destination". Now, the error message is the following: "Value
100000 * 100000 out of range for destination".
Further, after this change string values included in conversion or overflow
error messages are enclosed in single quotes. If the string is truncated,
an ellipsis (...) is used to indicate this.
For example:
select cast( '0123456789012345678901234567890123456789' as int )
would have given the error message: "Value 01234567890123456789012345678
out of range for destination" Now the error message is: "Value
'0123456789012345678901234567890123...' out of range for destination".
Similarly, an ellipsis is now used when printing binary values.
When NCHAR values were printed for error messages or plain text (explanation(),
plan(), or graphical_plan()), the text was not represented correctly.
For example:
select cast( unistr('\u00df') as int )
would have caused the error message to contain an incorrectly represented character:
the text in the message was the result of cast( cast( unistr('\u00df') as
binary ) as char ). Now, the error is: "Cannot convert 'ß' to a int".
The error message is now formed by converting from NCHAR collation to the
database CHAR collation. If there are characters that can not be represented
in the CHAR collation, they are replaced with substitution characters.
Also, when certain values were included in the result of a graphical_plan(),
they could have generated an invalid XML result. For example, the following
query previously generated an 'Invalid plan' error: select 'a\x1ab'. Characters
that contain a byte value < 0x20 are now escaped and printed as '\xHH'
where HH is the two-digit hex code for the byte. For example, in the "Select
list" section of the graphical plan, the escaped text is now shown.
================(Build #3475 - Engineering Case #465371)================
If a procedure, trigger, batch or event contained a string which was continued
across multiple lines, and there was a syntax error in the procedure at some
point after the string, the wrong line number would have been reported in
the error message. This has been fixed.
================(Build #3475 - Engineering Case #465161)================
While rare, the server could have crashed on shutdown. This would only have
been seen if more than one external function had been called, and the calls
were either cancelled before the functions completed, or the database tasks
servicing the request had received a thread deadlock error before the functions
completed. This has been fixed.
================(Build #3474 - Engineering Case #464681)================
Servers running on Unix systems, and experiencing a high volume of HTTP connections,
may have hung. This has been fixed.
================(Build #3474 - Engineering Case #464669)================
In a database mirroring environment, an operational server may have crashed
when first establishing a connection to another partner or the arbiter. This
has been fixed.
================(Build #3474 - Engineering Case #464321)================
It was possible for the server to terminate the connection when an attempt
was made by the application to cancel a request. This has been fixed so that
the request is correctly cancelled and the connection is not terminated.
================(Build #3474 - Engineering Case #464201)================
The runtime server does not support execution of procedures; however, it
still attempted to execute a login procedure if one was defined using the
PUBLIC.login_procedure option. That attempt would always have failed and
caused a message to be displayed in the server console window for each connection:
Login procedure 'sp_login_environment' caused SQLSTATE '0AW04'
Triggers and procedures not supported in runtime server
This has been corrected so that the runtime server will no longer attempt
to invoke the login procedure.
A workaround would be to execute:
SET OPTION PUBLIC.login_procedure = '';
================(Build #3474 - Engineering Case #463912)================
A query with an ordered GroupBy operator immediately above an Exchange operator
in the plan could have returned incorrect results. This has been fixed.
================(Build #3474 - Engineering Case #463882)================
If an application, connected via the iAnywhere JDBC Driver, fetched a timestamp
value using the ResultSet.getTimestamp() method, then the fractional portion
of the returned timestamp would have been incorrectly truncated to millisecond
precision. This problem has now been fixed and the fractional portion of
the timestamp is now the expected nanosecond precision.
================(Build #3474 - Engineering Case #463763)================
When using BCP IN to populate a table which had a nullable date or time column,
the BCP utility would have given a 'NULLs not allowed' error if the data
file inserted a NULL into the date or time column. The server was incorrectly
describing all date and time columns to the BCP utility as non-nullable,
even if they were nullable. This problem has now been fixed.
================(Build #3474 - Engineering Case #463669)================
If the database option Percent_as_comment was set to 'OFF', statements such
as:
select 11 % 2
would have incorrectly reported an error, instead of the correct result
of 1 being returned. This has been fixed.
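As a sketch of the corrected behaviour (Percent_as_comment is a PUBLIC option, so this assumes DBA authority):

```sql
-- With Percent_as_comment off, % acts as the modulo operator.
SET OPTION PUBLIC.Percent_as_comment = 'OFF';
SELECT 11 % 2;  -- returns 1 after this fix
```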
================(Build #3474 - Engineering Case #463586)================
If a Remote Data Access server was created for a remote database with a different
charset than the local database, then it was possible that a string that
fit in a char(n) or varchar(n) column on the remote would not have fit into
a char(n) or varchar(n) column when inserted into a local table. The reason
being that performing charset translation on the remote string would have
yielded an equivalent string in the local database's charset, but the string
would have required more space in the local charset. Unfortunately, the Remote
Data Access layer was not raising an error or truncating the string in these
instances. As a result, attempting to insert these remote strings into a
local table would have generated a server assertion failure error. This problem
has now been fixed and either a truncation error will be raised if the string_rtruncation
option is on, or the Remote Data Access layer will silently truncate the
string to the maximum number of whole characters that will fit if the string_rtruncation
option is off.
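The behaviour can be chosen per connection through the string_rtruncation option; a minimal sketch:

```sql
-- Raise a truncation error when a remote string would be truncated:
SET TEMPORARY OPTION string_rtruncation = 'On';
-- ...or truncate silently to the number of whole characters that fit:
SET TEMPORARY OPTION string_rtruncation = 'Off';
```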
================(Build #3474 - Engineering Case #462634)================
Executing a CREATE VIEW statement could have failed with:
"*** ERROR *** Assertion failed: 102600 Error building EXISTS subquery"
if complex constraints existed on the tables referenced in the view definition.
This has been fixed.
For example:
create table T1 ( a int )
create publication Pub1 ( table T1 (a) where not exists ( select 1 from
dummy ) )
The following statement would have failed:
create view V1 as select * from T1
================(Build #3474 - Engineering Case #462463)================
When executing a wide insert into a proxy table, the statement could have
failed to insert rows beyond the first row, returning a "-308 Connection
terminated" or "-85 Communications error" error code. This
has been fixed.
================(Build #3474 - Engineering Case #455002)================
Attempting to update a view that contained an outer join using a searched
update, could have failed if there were NULL-supplied rows, even if the update
statement only modified columns from the preserved table.
For example, the following sequence could have incorrectly led to the error
CANNOT_UPDATE_NULL_ROW:
create view ab
as select apk, ax, bpk, bfk, bx
from a left outer join b on bfk = apk;
update ab set ax='Z' where apk=2
This has been fixed.
================(Build #3473 - Engineering Case #463662)================
In general, the optimizer enumerates all valid plans which compute the correct
result set for a given query. For each such plan, the optimizer estimates
an execution cost measured in microseconds. The plan with the smallest estimated
cost is then chosen as the execution plan for a query (aka the 'best plan'
for a query). If the database option Optimization_goal is set to FIRST-ROW,
the estimated cost for a plan enumerated by the optimizer is the estimated
time to compute the first row of the result set of the query. If the optimization
goal is set to ALL-ROWS, the estimated cost for a plan enumerated by the
optimizer is the estimated time to compute all the rows in the result set
of the query. The optimizer was not enumerating plans with complete index
scans on the right hand side of a MERGE JOIN. This has now been fixed.
================(Build #3473 - Engineering Case #463346)================
Under rare circumstances, the server could have crashed while processing a query. This
has been fixed.
================(Build #3473 - Engineering Case #463182)================
If an UPDATE or DELETE statement had a sufficiently simple structure, contained
a WHERE clause identifying a single row by primary key, and had additional
predicates comparing a string or numeric/decimal column to a value that was
NULL, then the server could have crashed. This has been fixed.
================(Build #3473 - Engineering Case #463170)================
If a query contained a left outer join with an index scan on the left hand
side, then the query could have returned incorrect results on a refetch operation.
This problem only occurred on refetch operations (e.g. fetch relative 0),
and only when the data from the current row on the left hand side of the
join had been modified or deleted since the original access.
For example:
create table R ( x integer primary key );
insert into R values (1);
insert into R values (2);
insert into R values (3);
create table T ( x integer primary key );
insert into T values (1);
insert into T values (3);
Connection A:
open crsr { select R.x, T.x from R ( with index( R ) ) left join T on R.x
- T.x = 0 }
fetch crsr into x1, x2
=> returns (x1, x2) = (1, 1)
Connection B:
update R set x = 99 where x = 1
Connection A:
fetch crsr relative 0 into x1, x2
=> returns (x1, x2) = (3, 3), should be (2, NULL)
This problem has been fixed.
================(Build #3473 - Engineering Case #462017)================
After dropping the primary key from a table, the optimizer could have failed
to recognize that values in the column were no longer necessarily unique.
While rare, this could have led to a poor execution plan. This has been
fixed.
================(Build #3473 - Engineering Case #455170)================
If a Windows NT system was shutdown or rebooted while a database server was
running as a service with desktop interaction enabled, the database server
could have crashed during the shutdown. This problem has been fixed.
================(Build #3472 - Engineering Case #463091)================
If an index was created on a column of type VARBIT or LONG VARBIT, then the
index would have incorrectly ordered some values.
For example, if x1 and x2 are two bit string values where x2 = x1 || repeat('0',N)
for some N > 0 (so, x2 = x1 followed by some number of 0 bits), the proper
order is therefore x1 < x2 because x2 is longer. If x2 has a multiple
of 8 bits and x1 does not, then an index will incorrectly order x2 < x1.
create table TVarBit( pk long varbit primary key, nbits int );
insert into TVarBit select repeat('0', row_num), row_num from sa_rowgenerator(0,8);
select nbits, pk from TVarBit with(index(TVarBit)) order by pk;
nbits,pk
0,''
8,'00000000'
1,'0'
2,'00'
3,'000'
4,'0000'
5,'00000'
6,'000000'
7,'0000000'
The value '00000000' is incorrectly sorted before '0' if the index is used.
If no index is used, the correct answer is given:
select nbits, pk from TVarBit with( no index ) order by pk;
nbits,pk
0,''
1,'0'
2,'00'
3,'000'
4,'0000'
5,'00000'
6,'000000'
7,'0000000'
8,'00000000'
This problem has been fixed. If a server with this fix is used on a database
that contains an index on a VARBIT or LONG VARBIT column, then the index
will be reported as corrupt by VALIDATE TABLE, VALIDATE INDEX, sa_validate()
or dbvalid. All such indexes should be rebuilt using the ALTER INDEX ...
REBUILD statement. Any primary key indexes that contain VARBIT/LONG VARBIT
should be rebuilt first, followed by other index types. Otherwise attempts
to rebuild foreign keys could fail. For some structures of referential integrity,
it may not be possible to rebuild the indexes in place and it may be necessary
to drop and re-create them.
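A rebuild can be sketched as follows; the index and table names here are placeholders, and the primary-key form of the ALTER INDEX statement should be consulted for primary key indexes:

```sql
-- Placeholder names: idx_bits is an index on a LONG VARBIT column of TVarBit.
-- Rebuild primary key indexes containing VARBIT columns first, then the rest.
ALTER INDEX idx_bits ON TVarBit REBUILD;
```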
================(Build #3472 - Engineering Case #461052)================
When comparing a VARBIT or LONG VARBIT value to a value of another domain,
both arguments were converted to NUMERIC and then compared.
For example:
select row_num, cast( cast( row_num as tinyint ) as varbit ) vb
from sa_rowgenerator(0,10)
where vb >= '00000100'
The above query would have returned the empty set; vb would have been converted
to NUMERIC and compared to CAST('00000100' AS NUMERIC)==100. This has been
fixed. Now, both arguments are converted to LONG VARBIT and compared as bit
strings. The above query will now return the following:
row_num,vb
4,'00000100'
5,'00000101'
6,'00000110'
7,'00000111'
8,'00001000'
9,'00001001'
10,'00001010'
Further, when bit string values were displayed in plan text, they were displayed
as a hexadecimal string that did not have a clear relationship to the bitstring
value. For example, the bit string CAST( '00000100' AS VARBIT ) was previously
displayed as 0x080000000100000004000000 in the plan text. Now, bit string
values are displayed in the plan text as a string literal prefixed with B
(for example, B'00001010'). The B prefix distinguishes bit strings in the
plan text from character strings.
================(Build #3470 - Engineering Case #462595)================
If a TDS connection had already reached the maximum number of prepared statements
allowed, as defined by the Max_statement_count option, and then received
a language request, the server may have crashed instead of failing the request
with SQLCODE -685 "Resource governor exceeded". This has been fixed.
================(Build #3470 - Engineering Case #462447)================
If an HTTP request was made to the server, and then subsequently the Java
VM was loaded, then it was very likely that the server would have hung on
shut down. This problem has been fixed.
================(Build #3470 - Engineering Case #462332)================
It was possible to create a foreign key between two global temporary tables,
where one was shared and the other was not. This is no longer allowed. A
foreign key can be created between global temporary tables now only if both
are shared, or neither are shared.
================(Build #3470 - Engineering Case #462307)================
The setting for the option Sort_collation was not used when ordering NCHAR,
NVARCHAR or LONG NVARCHAR values. This has been fixed.
Note, if a materialized view was created with NCHAR values, this change
may alter the results of the materialized view. Such views should be recomputed
after this change is applied.
================(Build #3470 - Engineering Case #462299)================
If client statement caching was disabled for a connection by setting the
option Max_client_statements_cached to 0, then prepared statements on this
connection could still have used resources after the application had dropped
them. Statements would have been correctly dropped and re-prepared if the
application prepared the same statement again. This has been fixed so that
if client statement caching is disabled, the resources for a statement will
now be released when the application drops the statement.
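For reference, client statement caching is disabled per connection as follows (a sketch of the option setting described above):

```sql
-- Disable client statement caching; with this fix, server resources for a
-- statement are released as soon as the application drops it.
SET TEMPORARY OPTION Max_client_statements_cached = 0;
```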
================(Build #3470 - Engineering Case #462181)================
The Unload utility (dbunload) required table names used in the -t and -e
options to be case sensitive in case sensitive databases. This has been corrected
so that the table names are now case insensitive.
================(Build #3470 - Engineering Case #462029)================
The United States has extended Daylight Saving Time (DST) by 4 weeks in the
U.S. time zones that recognize DST. Starting in 2007, Daylight Saving Time
will begin the second Sunday in March (2am, March 11, 3 weeks earlier than
previous years) and will end the first Sunday in November (November 4, 2am,
1 week later than previous years). SQL Anywhere software is not directly
vulnerable to issues related to this change. SQL Anywhere 10.0.1 includes
Java Runtime Environment 1.5.0_10 (5.0_u10), which includes changes to resolve
issues associated with the new DST rules.
For more details, see:
http://www.ianywhere.com/developer/technotes/sa_daylight_savings_time_change.html
================(Build #3470 - Engineering Case #461407)================
Under some specific conditions, if a trigger used row variables that were
defined to be NUMERIC, DECIMAL or a string type, then the server could have
crashed, or reported an assertion failure. This has been fixed.
================(Build #3470 - Engineering Case #461154)================
The server could have crashed when under heavy load, and issuing (and possibly
cancelling) a large number of external calls. These could have been external
function calls, HTTP client procedures, RPC calls to remote servers, or Java
requests. A race condition in the server has been corrected.
================(Build #3470 - Engineering Case #461056)================
If an assignment statement contained a concatenation operator, it was possible
for the assignment to modify the value of an unrelated variable.
For example:
begin
declare @x long varchar;
declare @y long varchar;
set @x = 'abc';
set @y = @x;
set @y = @y || 'z';
end
It was possible for the variable @x to be incorrectly modified to be 'abcz'.
In order for this problem to have occurred, the concatenation must have been
of the form:
set @var = @var || 'string';
for some variable @var, and further, the variable @var must have shared
the same string value as another variable. In cases where the problem may
have occurred, it would occur intermittently depending on the current server
state. It has now been fixed.
================(Build #3470 - Engineering Case #460873)================
The server could have crashed when stopping a database. This problem would
only have happened when a database was shutdown immediately after a transaction
had ended, and then only rarely. This has been fixed.
================(Build #3470 - Engineering Case #456312)================
If a filename provided to a BACKUP, RESTORE, LOAD TABLE, UNLOAD TABLE, or
UNLOAD statement was not a CHAR-based character string (CHAR, VARCHAR, LONG
VARCHAR), then the filename could have been incorrectly translated to an
operating system file name.
An error is now given if a filename is provided to one of the above statements
and the filename is not a CHAR-based string. This can happen either if the
N'' string literal prefix is used to denote an NCHAR string or if a variable
is used and the variable is not CHAR-based.
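An illustrative contrast, assuming a table t and a placeholder file path:

```sql
LOAD TABLE t FROM '/tmp/data.txt';   -- accepted: CHAR string literal
LOAD TABLE t FROM N'/tmp/data.txt';  -- now rejected: NCHAR string literal
```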
================(Build #3470 - Engineering Case #456302)================
The server may have crashed if multiple connections ran the same stored procedure
concurrently the first time after the database was started, or an event caused
the procedure to be reloaded. For the crash to have occurred, the procedure
must have contained a control statement (e.g. IF) with a condition that used
a subselect referencing proxy tables. This has been fixed.
================(Build #3470 - Engineering Case #456231)================
If a statement contained one or more large IN list predicates that were processed
using the IN List algorithm, then the statement could have taken a long time
to respond to a cancel request. In some cases, a communication error would
have occurred when executing such statements. This has been fixed.
For more information on the IN List algorithm, see the documentation:
SQL Anywhere® Server - SQL Usage
Query Optimization and Execution
Query execution algorithms
Miscellaneous algorithms
IN List
================(Build #3470 - Engineering Case #455776)================
When a cursor was opened on a query with a "Nested Loops Semijoin Algorithm"
or a "Nested Loops Anti-Semijoin Algorithm" in the execution plan,
the cursor could have been repositioned inappropriately on a FETCH RELATIVE
0. Instead of remaining on the current row, a different row or NOTFOUND could
have been returned. This has been fixed.
================(Build #3470 - Engineering Case #455148)================
Use of references to procedures in UPDATE statements could have caused the
server to crash. The server will now generate an appropriate error message.
================(Build #3470 - Engineering Case #453412)================
An AFTER UPDATE OF column-list trigger may have fired even if no column in
the trigger's column-list had been changed. This would only have occurred if some
other column specified in the UPDATE's SET list changed its value.
For example:
create trigger T1_UA after update of C1 on T1
for each row
begin
message 'after update trigger on column C1 fired';
end;
insert into T1 ( C1, C2 ) values ( 'abc', 'xyz' );
The following UPDATE changes C2, but C1 does not change, so trigger T1_UA
should not fire
update T1 set C1= 'abc', C2 = 'XYZ';
This has been fixed so that triggers no longer fire under these conditions.
================(Build #3420 - Engineering Case #463733)================
The server may have hung on shutdown if it was running an HTTPS server. This
has been fixed.
================(Build #3419 - Engineering Case #465383)================
A web services procedure attempting to make a secure connection to a remote
web service may have failed with the error "Unable to connect to the
remote host specified by '<URL>'". This has been fixed.
================(Build #3418 - Engineering Case #465168)================
Pipelined HTTP requests may have prematurely exceeded the MaxRequestSize
protocol option limit, resulting in a 413 - 'Request Entity Too Large' HTTP
status. This has been fixed. Now, by default, the MaxRequestSize limit is
102,400 bytes per request.
================(Build #3416 - Engineering Case #463747)================
Pipelined HTTP requests may have caused the server to crash under certain
circumstances. This has been fixed.
================(Build #3416 - Engineering Case #462903)================
The server could have reported assertion failures, a fatal error or crashed,
when reading rows with certain characteristics. This would only have occurred
if the row contained NCHAR data, or a column in the table was defined with
CHARACTER-LENGTH SEMANTICS (such as a datatype of VARCHAR( N CHAR ) ). The
problem could have occurred during querying the data, DML, or DDL. The data
in the database file was correct. In order for the problem to occur, the
column must have been declared to be one of the following:
- CHAR( N CHAR ) or VARCHAR( N CHAR )
- NCHAR( N ) or NVARCHAR(N)
where N <= 127 and (N * (max-bytes-per-char) > 127). This problem
is now fixed and servers containing this fix will be able to properly read
rows containing the aforementioned datatypes.
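To make the affected range concrete, the following hedged sketch (the table and column names are invented) shows declarations that satisfy the condition above, assuming a UTF-8 CHAR collation with up to 4 bytes per character:

```sql
-- With max-bytes-per-char = 4 (e.g. a UTF-8 collation), N = 100 gives
-- N <= 127 and N * 4 = 400 > 127, so rows stored in columns like these
-- could have triggered the assertion failures described above.
CREATE TABLE demo_clen (
    c1 VARCHAR(100 CHAR),  -- CHAR/VARCHAR with character-length semantics
    c2 NVARCHAR(100)       -- NCHAR types always use character-length semantics
);
```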
================(Build #4210 - Engineering Case #666013)================
Using Sybase Central's searching capabilities to search for a single backslash
would have caused Sybase Central to crash. This has been fixed.
================(Build #4173 - Engineering Case #644464)================
If a database had two or more 'post_login_procedure' option settings, then
attempting to connect to the database would have failed with a "Subquery
cannot return more than one row" error. This has been fixed.
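One plausible way to end up with two settings of the option (the procedure name here is hypothetical) is a PUBLIC setting plus a user-level override, which leaves two catalog rows for the same option:

```sql
-- Two settings of the same option; before this fix, connecting as
-- user "alice" failed with "Subquery cannot return more than one row".
SET OPTION PUBLIC.post_login_procedure = 'dbo.check_login';
SET OPTION "alice".post_login_procedure = 'dbo.check_login';
```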
================(Build #4169 - Engineering Case #654434)================
When sorting the plug-in names in the Plug-in dialog, an exception would
have been thrown if a plug-in was first reloaded. This has been fixed.
================(Build #4163 - Engineering Case #652547)================
Altering the schedule for an event to remove a days-of-month specification
did not set SYSSCHEDULE.days_of_month to null. This has been fixed.
================(Build #4161 - Engineering Case #651520)================
Displaying the results of an Index Consultant run of a large workload could
have required an excessive amount of memory, causing Sybase Central to crash
if there was not enough physical memory available. This has been fixed.
================(Build #4159 - Engineering Case #651491)================
Typing Ctrl-Z in an unmodified editor would have marked it as modified. This
has been fixed.
================(Build #4135 - Engineering Case #645005)================
When using the Unload or Extract Database wizards, if unload/extract into
a new database was chosen, and Strong encryption was specified with an encryption
key that didn't match the confirm encryption key, then the wizards would
have continued to report a key mismatch error even after the encryption type
was changed to Simple (where the encryption key is not used). This has been
fixed.
================(Build #4127 - Engineering Case #642826)================
A NullPointerException could have occurred when the DetailsList lost focus.
This has been fixed.
================(Build #4092 - Engineering Case #634032)================
The 'Find/Replace' toolbar button did not work in an editor window if the
toolbar had been undocked. This has been fixed by preventing the toolbar
from being undocked.
A workaround is to use the menu item or F3 key to open the 'Find/Replace'
window.
================(Build #4091 - Engineering Case #633799)================
The "Find/Replace" window could have opened without any components
if it was opened from a window used to view a stored procedure or view. The
window typically contained only a grey or white rectangle. The problem happened
only when opening a file in the editor window, or when clicking "File/New".
This has now been fixed.
================(Build #4091 - Engineering Case #633784)================
The "Tools" button on the "Connect" window, and any toolbar
buttons in Sybase Central which have drop-down arrows, were drawn without
the usual button border and background gradient on Linux and Solaris computers.
This has now been fixed so that they have the correct background and border.
================(Build #4078 - Engineering Case #631122)================
After right-clicking a procedure or function and selecting "Execute
from Interactive SQL", the resulting dialog where the parameters are
specified had a row height for the table that was too small. This caused
text to be clipped in the "Value" column. This has been fixed.
================(Build #4057 - Engineering Case #625800)================
Registering a plug-in, but not loading it immediately, would have caused
an exception to occur. This has been fixed.
================(Build #4057 - Engineering Case #625759)================
When in the Procedure debugger, clicking on the header of the breakpoint
icon column caused an error. This has been fixed.
================(Build #4056 - Engineering Case #625515)================
Sybase Central did not allow stopping a Windows service in a "Delete
pending" state. This is now fixed. The Stop menu item is now enabled
for running services, regardless of
whether a delete is pending.
================(Build #4033 - Engineering Case #619388)================
When using Sybase Central's fast launcher, the timeout setting was ignored
and the launcher continued to run. This has been corrected.
================(Build #4029 - Engineering Case #619253)================
Executing any query in the Query Database dialog when debugging and at a
breakpoint would always have displayed the error "Invalid plan.".
Showing a plan while debugging is not required so the Plan button has been
removed.
================(Build #4028 - Engineering Case #619039)================
The Execute Query menu item was enabled when in the debugger and not at a
breakpoint. This has been fixed.
================(Build #4026 - Engineering Case #618639)================
When viewing the Application Profiling "Details" pane for a large
trace, the pane could have taken a very long time to paint. This has been
fixed. Although the pane may still take several seconds to paint, it should
be two orders of magnitude faster.
Note, users who do not have this fix can still view the underlying data
manually by querying the sa_diagnostic_* tables as described in the documentation.
================(Build #4025 - Engineering Case #618244)================
In the Options dialog, when disabling or enabling the fastlauncher checkbox,
the "Configure..." button was not disabled or enabled as appropriate
as well. This has been fixed.
================(Build #3988 - Engineering Case #606841)================
When using the Table wizard to create a new table, pressing the F5 key before
saving the table and then answering 'No' to the "Do you want to save
changes" dialog would either have caused Sybase Central to crash, or
have caused the dialog to be displayed repeatedly. This has been fixed.
================(Build #3966 - Engineering Case #591832)================
When using the debugger in Sybase Central, a NullPointerException could
have been thrown when leaving debugging mode. This would likely have occurred
infrequently, but has now been fixed.
================(Build #3912 - Engineering Case #577938)================
Sybase Central could have occasionally reported an internal error on startup,
when the Fast Launcher option was turned on. This has been fixed.
================(Build #3897 - Engineering Case #574022)================
When editing a table's schema using the table editor, the Cut, Paste, Delete
and Undo toolbar button enabled states would not have been updated if the
Ctrl key accelerators were used to perform these operations. This has been
fixed.
================(Build #3884 - Engineering Case #569797)================
When unloading or extracting a database into a new database that was created
with strong encryption, the Unload and Extract Database wizards would have
displayed the encryption key in plain text in the Unload or Extract Database
Messages Dialog. This has been corrected so that now the encryption key is
displayed as "***".
================(Build #3884 - Engineering Case #569124)================
When a Synchronization Model was modified to include either handle_DownloadData
or handle_UploadData events, the model file could then not be re-opened.
An error message would have been displayed stating "unknown or missing
event ID handle_DownloadData" or "unknown or missing event ID handle_UploadData".
This has been corrected.
================(Build #3883 - Engineering Case #568426)================
When browsing permissions for tables and views, a user might have been listed
more than once on the Permissions page of the property sheet for a table
or view. Similarly, a table or view might have been listed more than once
on the Permissions page of a user's property sheet. When fetching permissions,
rows with the same grantee but different grantor, were not properly grouped.
This has been fixed.
================(Build #3837 - Engineering Case #556447)================
When editing a DATE value in the "Results" pane of the Interactive
SQL utility (dbisql), or on the "Data" tab in the SQL Anywhere
plug-in for Sybase Central, if the date was typed in, rather than using the
date chooser dialog, the value entered was ignored when the value was updated.
This has been corrected so that the value entered is now sent to the database.
================(Build #3819 - Engineering Case #553759)================
The "Replace All" action in the syntax highlighting editor could
have corrupted the text if the action was restricted to the selected text.
This has been fixed.
================(Build #3789 - Engineering Case #492814)================
A user without administrator permission was unable to start, stop or delete
SQL Anywhere services on a Windows 2003 machine, even if that user had been
granted permission by an administrator to control those services. This has
been fixed.
================(Build #3774 - Engineering Case #545646)================
Creating a new connection profile by copying an existing profile, could have
resulted in the copy having the wrong plugin type. This has been fixed.
================(Build #3744 - Engineering Case #540725)================
The Migrate Database wizard could have failed to display messages in the
messages dialog if it was connected to two or more databases running on the
same server. The wizard was listening for asynchronous messages on the wrong
connection. This has been fixed.
================(Build #3731 - Engineering Case #537558)================
A side effect of the combined fixes for Engineering cases 533936 and 536335
was to cause Sybase Central to crash when attempting to expand the table
tree view in the MobiLink plugin. This problem has now been fixed.
================(Build #3721 - Engineering Case #535817)================
Sybase Central was not able to attach to a tracing db when the server was
set to ignore all broadcasts (-sb 0), and no PORT number was specified.
This has been fixed.
Note, this is a follow-up to Engineering case 530790.
================(Build #3717 - Engineering Case #534292)================
When run on non-Windows platforms, the Server Message Store wizard would
have crashed after leaving the second page. This has been fixed.
In related issues, the toolbar button for opening this wizard was missing,
and the "Create a client message store" item should have been removed
from the Task list on non-Windows machines, but was not. These have been
fixed as well.
================(Build #3709 - Engineering Case #532807)================
The Server Message Store wizard could have failed to complete with the message
"Could not install QAnywhere support. Request to start/stop database
denied". This would have occurred if a server was already running and
it was not started with the "-gd DBA" or "-gd all" command
line options, and "Create an ODBC data source" was checked in the
wizard. This has now been fixed.
================(Build #3709 - Engineering Case #530790)================
When running the Application Wizard, Sybase Central would have failed to
connect to the tracing database if the server had been started with broadcast
ignored (-sb 0). This has been corrected. Sybase Central now will include
the machine name and port in the connection string when attaching to the
tracing database.
================(Build #3699 - Engineering Case #530587)================
When viewing the server messages from the SQL Anywhere Console utility (dbconsole)
or from Sybase Central, there was a possibility that messages could have
been duplicated or lost. This has been fixed.
================(Build #3696 - Engineering Case #529855)================
Sybase Central would have crashed with a NullPointerException if a message's
property sheet was opened and the message had properties with null values.
This has been fixed.
================(Build #3688 - Engineering Case #500505)================
After creating a new tracing database using the Tracing Wizard, the Create
Database button remained enabled. This has been corrected so that the button
is now disabled after the database is created successfully, and re-enabled
only if the database file name, user name, or password fields are changed.
================(Build #3688 - Engineering Case #500354)================
When running with a screen resolution of 800x600, the Tracing Wizard would
not have fit onto the screen. This has been fixed.
================(Build #3685 - Engineering Case #499313)================
It would not have been possible to connect to a "Generic ODBC"
database if the password contained any of the following characters: "[]{}(),:?=!@".
This has been corrected for all but the equals sign "="; the rest
are now allowed.
================(Build #3683 - Engineering Case #426806)================
The Index Consultant wizard would have returned syntax errors, and could
not work properly, if the statements to be analysed contained column names
that were only valid as quoted identifiers. This has been fixed.
================(Build #3674 - Engineering Case #496937)================
When run on Mac OS X systems, an IndexOutOfBoundsException could have been
thrown when navigating the object tree. This has been fixed.
================(Build #3657 - Engineering Case #493578)================
Attempting to add a row to a table which contained a UNIQUEIDENTIFIER column
using the table in the "Results" panel could have caused Sybase
Central to crash. This problem affected the Interactive SQL utility dbisql
as well. It has now been fixed.
Note that the problem was restricted to UNIQUEIDENTIFIER columns which could
not be null and which did not have default values.
================(Build #3651 - Engineering Case #485262)================
If the server crashed during application profiling and was then restarted,
Sybase Central would have crashed on an attempt to connect to the restarted
database. This has been fixed.
================(Build #3640 - Engineering Case #491111)================
Scrolling the horizontal scrollbar to the right and then refreshing the list
did not update the column headers. This has been fixed.
================(Build #3623 - Engineering Case #483635)================
If a trigger was created using the Interactive SQL utility with the syntax:
CREATE TRIGGER owner.name ...
then Sybase Central would have displayed the trigger SQL with the trigger's
owner replaced by the trigger's name, such as:
ALTER TRIGGER name.name ...
This has been fixed. Now the SQL is displayed correctly as:
ALTER TRIGGER name ...
That is, the owner (which is syntactically valid, but is ignored by the
engine) is removed from the source.
================(Build #3616 - Engineering Case #485629)================
Application Profiling did not provide trigger profiling information. This
has been fixed.
================(Build #3605 - Engineering Case #485725)================
If a plug-in was registered using a JPR file that specified a directory in
a different case than another registered plug-in that shares the same classloader,
a warning was issued about the JAR file being different. This has been fixed.
================(Build #3599 - Engineering Case #484966)================
After connecting to a database containing a unique foreign key, selecting
the Indexes folder in the tree would have displayed the error "The values
<xxx> cannot fit into a byte" where <xxx> = 129 or 130.
This has been fixed.
================(Build #3597 - Engineering Case #467733)================
When editing a stored procedure which had more than about 1000 lines, CPU
usage could have gone to 100%, even without doing anything. This same problem
affected the Interactive SQL utility dbisql as well, when editing large SQL
files. The problem was caused by the amount of tokenizing being done for
syntax highlighting. This has been fixed.
================(Build #3539 - Engineering Case #476533)================
The debugger state was not being updated correctly when debugging a connection
currently at a breakpoint and then a running connection was chosen from the
connections list. This has been fixed.
================(Build #3521 - Engineering Case #473831)================
A watch for the variable SQLCODE did not evaluate correctly. This has been
fixed.
================(Build #3502 - Engineering Case #464474)================
On Mac OS X systems only, items in the menu bar for Sybase Central were enabled
even when a modal dialog was open. If one of these menu items was clicked
while the dialog was open, the program could have crashed. This has been
fixed.
Note, this problem affected the Interactive SQL utility, DBConsole, and
the MobiLink Monitor as well, which have also been fixed.
================(Build #3499 - Engineering Case #470056)================
Attempting to include a column with an approximate data type (real, float
or double) in an index would have caused a warning dialog to be displayed
discouraging this practice. The dialog was provided with an OK button only.
This has been corrected so that now both OK and Cancel buttons are provided,
with Cancel cancelling the operation.
================(Build #3492 - Engineering Case #468783)================
Clicking the Name column heading in the table editor, would not have sorted
the column names correctly. Specifically, all column names starting with
upper-case letters were sorted before all column names starting with lower-case
letters. This has been fixed. A similar fix has been made for the Data Type,
Value and Comment columns.
================(Build #3479 - Engineering Case #465964)================
If a web service had a SQL statement associated with it, and the web service
property sheet was used to change the web service's type to 'DISH', then
the SQL statement would have been deleted without warning. Now a warning
message appears when 'DISH' is selected in the "Service type:"
drop down list, with an opportunity to select Cancel without deleting the
SQL statement.
================(Build #3478 - Engineering Case #465733)================
Attempting to create a tracing database would have caused Sybase Central
to crash. This has been fixed.
================(Build #3474 - Engineering Case #464155)================
Attempting to use the Create Procedure wizard to create a Transact-SQL procedure,
would have caused Sybase Central to crash. This has been fixed.
================(Build #3474 - Engineering Case #463915)================
Sybase Central did not distinguish between CHAR or VARCHAR columns with byte-length
or character-length semantics, nor did it allow for the creation of CHAR
or VARCHAR columns with character-length semantics. Both of these problems
have been fixed. Now, CHAR and VARCHAR columns with character-length semantics
are displayed as "char(nn char)" and "varchar(nn char)"
respectively.
In addition, when editing a column in the table editor or Column property
sheet, or creating a domain or function in the corresponding wizards, it
was possible to specify a size value that would exceed the database maximum
(32767/maximum_character_length for CHAR and VARCHAR; 8191 for NCHAR and
NVARCHAR). This problem has also been fixed.
================(Build #3474 - Engineering Case #463613)================
Opening online help in Sybase Central may not have worked the first time
it was asked for, although subsequent requests would have succeeded. The
problem was timing and operating system dependent. It has been fixed.
================(Build #3474 - Engineering Case #463598)================
The Create Database wizard would have chosen the wrong defaults for the case
and accent sensitivity collation tailoring options for UCA collations if
the database was being created on a Japanese machine, the server's character
set was a Japanese character set, or a Japanese collation was selected for
the new database's CHAR collation. This has been fixed.
================(Build #3474 - Engineering Case #463589)================
Clicking the "Help" button on the "Add Watch" dialog
would not have done anything. Now it correctly opens the online help for
the dialog.
================(Build #3470 - Engineering Case #462673)================
The Unload Database and Extract Database wizards did not ensure that a new
database file name was specified when unloading or extracting into a new
database. Failure to specify a new database file name would have caused the
unload or extract operation to fail with the error: "Specified database
is invalid". This has been fixed.
================(Build #4209 - Engineering Case #663937)================
In the Interactive SQL utility, setting the "on_error" option to
"continue" was not preventing warnings from being displayed in
a popup window. This has been corrected so that when the option is set to
"continue", warnings are now displayed in the "Messages"
pane.
================(Build #4170 - Engineering Case #654981)================
The Console utility could have stopped refreshing connection properties after
changing the set of properties which were displayed, even after restarting
DBConsole. This has been fixed.
================(Build #4169 - Engineering Case #654253)================
In versions 10.0.0 and later of the Interactive SQL utility, using an ISQL
parameter for the value of an option in a SET OPTION statement did not work
because the parameter was not substituted correctly. This has been fixed.
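As an illustration, an ISQL command file like the following (the file name, parameter name, and option are examples) relies on that substitution; before this fix, the parameter was not expanded inside the SET OPTION statement:

```sql
-- Contents of a hypothetical set_opt.sql, run as: READ 'set_opt.sql' ['Off']
PARAMETERS opt_value;
SET OPTION PUBLIC.blocking = '{opt_value}';
```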
================(Build #4151 - Engineering Case #649468)================
When a SQL error occurred that was not in the last SQL statement entered,
an error dialog was displayed that had a 'Continue' and a 'Stop' button.
Hitting the escape key had no effect. This has been corrected so that hitting
the escape key on this dialog now selects the 'Stop' action.
For example, open the Interactive SQL utility (dbisql), connect to a database
and enter the following:
select * from foo;
select 'hello'
Since there is no table foo, the first statement generates an error dialog
with a 'Stop' and 'Continue' button. Hitting escape here is now the same
as pressing the 'Stop' button.
================(Build #4139 - Engineering Case #645986)================
If the database server reported a "definite" rowcount for a query
but then returned fewer than that many rows, dbisqlc could have displayed
a subset of the rows, or possibly no rows at all. This has now been fixed.
When using the "Connect with an ODBC Data Source" option in the
"Action" dropdown list of the "Login" tab of the connection
dialog with the 64-bit versions of the dbisqlc utility and dbmlsync, no
DSNs would have been displayed; and the 32-bit versions would only have shown
DSNs that used an ODBC driver with the same major version of SQL Anywhere
as dbisqlc and dbmlsync. This has been fixed, so that the 32-bit and 64-bit versions
now display all SQL Anywhere DSNs defined for SQL Anywhere version 6.0 and
up.
Dbisqlc did not correctly handle certain connection string components which
did not have a short form (such as the new "Server" parameter).
This problem has been fixed.
================(Build #4135 - Engineering Case #644855)================
If the Interactive SQL utility (dbisql) was open, and the results of a query
were displayed, changing the Windows desktop theme from "Windows XP"
to "Windows Classic", or otherwise changing the Window style from
"XP" to "Classic", would have caused dbisql to crash.
This has been fixed.
This issue could have manifested itself any time the Windows look-and-feel
was changed from something other than Windows Classic to Windows Classic.
It would also have affected Sybase Central if a table was selected in the
"Folders" panel.
This issue only occurred on Windows computers. Operation on other operating
systems was not affected.
================(Build #4123 - Engineering Case #641131)================
When connected to a database using an ODBC data source which used the ASA
9.0 ODBC driver, the Interactive SQL utility could have crashed if an INPUT
statement was executed which processed TIME, DATE, or TIMESTAMP data. This
has been fixed.
================(Build #4123 - Engineering Case #637447)================
The ALT left cursor and ALT right cursor keys on Solaris systems control
the desktop and cannot be used to view executed SQL statements. To get around
this limitation, the keys now used for viewing previous and next SQL statements
on Solaris systems are CTRL-up cursor and CTRL-down cursor respectively.
================(Build #4111 - Engineering Case #639018)================
The Query Editor could have crashed when opened if the initial SELECT statement
contained "*" and at least one explicit column name in the column
list. This has been fixed.
================(Build #4111 - Engineering Case #637203)================
The column alignment used by the OUTPUT statement for the FIXED file format
was often inappropriate for the data type. In general, numbers should be
right-aligned, while everything else is left-aligned. This has been fixed.
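For reference, a statement pair like the following (the file and table names are examples) produces the FIXED-format output in question; with this fix, numeric columns are right-aligned and other columns left-aligned:

```sql
SELECT emp_id, emp_name FROM employees;
OUTPUT TO 'employees.txt' FORMAT FIXED;
-- emp_id (numeric) is now right-aligned; emp_name is left-aligned.
```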
================(Build #411 - Engineering Case #638986)================
The Query Editor could have crashed if the last time it was opened a join
had been added to the query, but the join type was left unspecified (blank).
This has been fixed.
================(Build #4109 - Engineering Case #637174)================
If a file was inserted using the "Edit/Insert File" menu item,
the file would have been locked by dbisql until dbisql was closed. If the
Fast Launcher option was turned on, the file was locked until the Fast Launcher
also shut down (which by default happens after 30 minutes of inactivity).
Now, the file is unlocked as soon as its text has been added to the "SQL
Statements" field.
================(Build #4105 - Engineering Case #634445)================
As a side-effect of the changes made for Engineering case 627780, the Start
Server in Background utility (dbspawn) on Windows no longer allowed the database
server start line to be passed as a single quoted string. This behaviour
has now been restored to its previous state on Windows only, and only for
versions 10.0.1 and 11.0.1. Version 12 will retain
the new behaviour as per the documentation.
================(Build #4100 - Engineering Case #635443)================
When not connected to a database in the SQL Anywhere Console utility (dbconsole),
the "File/Options" menu is disabled. However, the "Options" context
menus for the "Connections", "Properties", and "Messages"
panels were not disabled. This has been corrected so that now they are disabled.
================(Build #4096 - Engineering Case #634503)================
When completing the name of a column in a SELECT statement following the
FROM clause, the completed text could have included the owner name for the
table which contained the column, but not the table name. This was not valid
SQL. Now, the owner name is not part of the inserted text.
================(Build #4091 - Engineering Case #633775)================
Exporting source control commands from the "Custom Source Control Options"
window could have caused the Interactive SQL utility to crash on Mac OS X
systems. This has been fixed.
================(Build #4091 - Engineering Case #633744)================
On Mac OS X systems, the name of a saved history file was given the extension
"..sq", rather than ".sql", when an explicit file extension
was not entered. This has been fixed.
================(Build #4090 - Engineering Case #633610)================
The "Find/Replace" dialog could have failed to find text when the
"Look in selection" box was checked. This was most likely to happen
if the selection started far from the start of the text, and ended close
to the end of the text. This has been fixed.
================(Build #4086 - Engineering Case #632743)================
The text fields on the second page of the Index Consultant were too narrow
to display even their default values. This problem was most apparent on
Mac OS X systems, although it could also have occurred on any platform if
the font used by the application was sufficiently large, or if the look-and-feel
for that platform had a wide border for text fields. This has been fixed.
================(Build #4085 - Engineering Case #632545)================
Pressing a non-character key (e.g. LeftArrow, Alt, Space, etc.) could have
inserted a hollow box character into the SQL Statements field. This has been
fixed.
This problem was more readily seen on Linux systems than on Windows.
================(Build #4083 - Engineering Case #632177)================
The text completer would not have suggested any names following a string
of the form "owner.partialTableName" if "owner" was the
same as "partialTableName". This would have occurred when trying
to complete the name of a system table, e.g.:
SELECT * FROM sys.sys
This problem would have affected text completion in SELECT, DELETE, and
UPDATE statements and has now been fixed.
================(Build #4081 - Engineering Case #631781)================
The text completer could have failed to suggest column names at the end of
a dotted expression if the SQL statement spanned more than one line. For
example, if the completer was opened at the end of the following statement:
SELECT * FROM customers C
WHERE C.
it should have suggested the columns in the "Customers" table,
but it did not. This has been fixed so that now it does.
================(Build #4078 - Engineering Case #591837)================
The Index Consultant in the Interactive SQL utility would have failed to
process queries containing line-terminated comments (i.e. -- or //). This
has been fixed.
As a workaround, removing the comments allowed the analysis to proceed.
================(Build #4077 - Engineering Case #630522)================
The Interactive SQL utility (dbisql) could have crashed if more than one
dbisql window was opened by clicking the "Window/New Window" menu
item, the first window was closed, then the "Preferences" item in
the "Interactive SQL" menu was clicked. This bug also had the
symptom of always showing the preferences for the first window, never for
any of the subsequently opened windows. This has been fixed.
================(Build #4073 - Engineering Case #629458)================
Clicking the Close button in the title bar of the Query Editor was considered
equivalent to clicking the OK button. This was incorrect; it should have
been equivalent to clicking the Cancel button. This has been fixed.
================(Build #4067 - Engineering Case #628253)================
The Interactive SQL utility option isql_print_result_set was being ignored.
This has been corrected so that it is once again respected.
================(Build #4060 - Engineering Case #626474)================
The SQL cited in the "ISQL Error" window did not display blank
lines. As a result, the line number in the database error message might not
have corresponded to the displayed SQL if the statement contained blank lines.
This has been fixed.
Also, the line and column shown in the status bar of the main DBISQL window
is no longer updated if the caret (insertion point) is moved in the text
field that shows the SQL in the "ISQL Error" window.
================(Build #4060 - Engineering Case #623276)================
On Windows systems, a reload of a pre-version 10 database file could have
hung. Unix
systems were not affected. This has been fixed.
================(Build #4057 - Engineering Case #625641)================
The Interactive SQL utility could have reported an error on some Windows
computers that its preferences or history file could not be saved. The error
message quoted a file name which typically included a directory under the
"Documents and Settings" directory which was not the home directory
of the current user. This has been fixed.
Note, this same problem has the potential to affect Sybase Central, DBConsole,
and MobiLink Monitor, though their symptoms are likely to be different.
================(Build #4056 - Engineering Case #625329)================
The Service utility was not reporting the warning "The specified service
is running. The service will be deleted when it is stopped." when deleting
a running service. It was failing to detect the running state of the service
during the delete and failing to report the warning as expected. This has
been fixed.
================(Build #4055 - Engineering Case #625325)================
When using the Service utility to delete a service that was running, the
warning "The specified service is running. The service will be deleted
when is is stopped." was reported. The warning should read "The
specified service is running. The service will be deleted when it is stopped."
The wording has now been corrected.
================(Build #4055 - Engineering Case #624986)================
If the Interactive SQL fast launcher was enabled, and there was enough connection
information on startup to attempt a connection, the main dbisql window could,
very occasionally, have entered a state in which it did not paint correctly.
This has been fixed.
There are a number of workarounds:
- Minimize and then restore the dbisql window, or
- Resize the window, or
- Turn off the fast launcher for dbisql
================(Build #4017 - Engineering Case #615655)================
For some types of page corruption, the Validate utility (dbvalid) could have
reported incorrect page numbers. This has now been corrected.
================(Build #4017 - Engineering Case #613984)================
Rebuilding version 9 or earlier databases using the Unload utility (dbunload)
could have failed with the error "Unable to start specified database:
autostarting database failed" if the old database had been run with
database auditing. This has been fixed.
================(Build #4010 - Engineering Case #613261)================
Executing a query like the following from SQL Server Management Studio (2005):
select * from saoledblink..SYS.syscollation;
would have failed with the error:
Msg 7356, Level 16, State 1, Line 1
The OLE DB provider "SAOLEDB.11" for linked server "saoledblink"
supplied inconsistent metadata for a column. The column "collation_order"
(compile-time ordinal 4) of object ""SYS"."syscollation""
was reported to have a "DBCOLUMNFLAGS_ISFIXEDLENGTH" of 0 at compile
time and 16 at run time.
The problem occurred for BINARY(n) columns. The DBCOLUMNFLAGS_ISFIXEDLENGTH
schema attribute was set true (0x10) at run time. This problem has been fixed.
================(Build #4005 - Engineering Case #612641)================
The Deployment wizard would have failed to work properly on Windows Vista
and Windows 7. The wizard would have claimed to complete successfully, but
the resulting MSI was invalid. The wizard was attempting to create a temporary
file in the Program Files directory, which is disallowed by Windows Vista
and Windows 7. This has now been corrected.
A workaround for this issue is to run the deployment wizard as an administrator.
================(Build #4005 - Engineering Case #609454)================
When trying to bind null values for string or blob columns, the SQL Anywhere
C API would have crashed in the call to sqlany_execute(). This has been fixed.
Also, when binding null values, dbcapi required a valid type to be specified.
This is no longer required.
================(Build #3999 - Engineering Case #610723)================
If the Data Source utility (dbdsn) was used to create an ODBC data source,
but the -c option was not specified, a data source would have been created
containing "LINKS=ShMem,TCPIP". This has been fixed; the -c option is now
required when -w is used.
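For example, a command line that satisfies this requirement might look as
follows (the data source name and connection parameters are illustrative only):

```
dbdsn -w MyDataSource -c "UID=DBA;PWD=sql;DBF=demo.db"
```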
================(Build #3990 - Engineering Case #606465)================
If the "kerberos" connection parameter, or its short form "krb",
was given on the command line, the Interactive SQL utility would not have
connected to the database unless a userid was also given. This has been fixed.
================(Build #3987 - Engineering Case #606440)================
If a DSN was created using the Data Source utility (dbdsn), attempting to
modify the userid or password of the DSN using the ODBC Administrator would
have reported no errors, but it would have failed to change either of these
fields. This has been fixed.
================(Build #3951 - Engineering Case #587256)================
If a database with a foreign key declared as "NOT NULL" was unloaded
or rebuilt, the resulting reload.sql or database contained the foreign key
without "NOT NULL" declared. This has been fixed.
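As an illustration of the affected declaration (table and column names here
are invented for the sketch):

```sql
CREATE TABLE Parent (
    id INT PRIMARY KEY
);
CREATE TABLE Child (
    id  INT PRIMARY KEY,
    pid INT NOT NULL REFERENCES Parent ( id )
);
-- Before the fix, the rebuilt database declared Child.pid without
-- NOT NULL, silently weakening the constraint.
```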
================(Build #3931 - Engineering Case #560069)================
When executing a VALIDATE statement, or running the Validation utility (dbvalid),
table validation would have failed to report errors when an index did not
contain all the rows in the table. This has now been corrected.
Note, when validating a database with a 10.0.1 server between builds 3920 and
3930 inclusive, it was also possible for errors to be reported when in fact
there were no errors. In this case, the 10.0.1 server should be updated to a
build number of 3931 or higher, and the validation rerun to see if the errors
are still reported.
================(Build #3909 - Engineering Case #574920)================
A client-side backup of a database with a path and filename length greater
than 69 bytes in the client character set could have failed or truncated
the filename. This has been fixed.
================(Build #3906 - Engineering Case #576016)================
Use of the "-f" command line option did not behave consistently
when the fast launcher option was on. It would work correctly the first time,
but subsequently running "dbisql -f filename" with a different
file, would have opened the first file again. This has been fixed.
================(Build #3900 - Engineering Case #557829)================
If the MobiLink Listener (dblsn) was started with the -q option ("run
in minimized window"), and it was then restored by double clicking on
the icon from the today screen, the "shutdown" button did not appear.
This has been fixed.
================(Build #3898 - Engineering Case #574314)================
Attempting to modify a service that was already running using the Service
utility (dbsvc) with the -w "create service" command line option
would have failed. The utility would have deleted the server, but would not
have been able to re-create it. This has been fixed. If the service is running,
dbsvc will now report an error and will not delete the service.
================(Build #3879 - Engineering Case #568436)================
When running on a Windows machine configured to use the 1252 code page, if
the Interactive SQL utility (dbisql) attempted to open a file which contained
a Euro sign (€), it would have asked for which code page to use to interpret
the file. Now, dbisql recognizes that the Euro sign is part of the Windows
1252 code page, and reads the file without prompting. This change also fixes
similar behavior when a file contains any of the following characters:
€ U+20AC Euro sign
Ž U+017D Latin capital letter Z with caron
ž U+017E Latin small letter Z with caron
================(Build #3874 - Engineering Case #567017)================
When using the SQL Anywhere Support utility to check for updates (dbsupport
-iu), it may have returned "Error checking for updates. Please try again
later." Subsequent retries by the same dbsupport instance would also
have failed. A counter variable was not being reset. This has now been fixed.
================(Build #3862 - Engineering Case #564472)================
The Interactive SQL utility (dbisql) could have reported an internal error
if, when connected to an UltraLite database, the text completer was opened
in a SELECT statement and the caret was immediately after a period. This
has been fixed.
================(Build #3855 - Engineering Case #562605)================
If more than one of the dblocate filtering options (-p, -s, -ss) was used
and a hostname or IP address was specified, only one was applied. There was
an implicit ordering, and only the first item in that ordering that had been
specified would have been used; the second and subsequent options were ignored.
The ordering was:
hostname/IP address specified on command line
-p
-s
-ss
This has been fixed. If more than one of these options is specified, they
are all now applied.
================(Build #3837 - Engineering Case #557507)================
If the statement:
set option public.login_mode='Standard,Integrated'
was executed, it would have been recorded in the transaction log as
set option public.login_mode='Standard'
This could have affected mirroring environments where the login_mode was
set correctly on the primary server, but not on the mirror. This has been
fixed.
================(Build #3834 - Engineering Case #556121)================
Changes made as part of the fix for Engineering case 554242, introduced a
problem where running the Validation utility (dbvalid) with a user who did
not have DBA authority, or execute permission on the dbo.sa_materialized_view_info
procedure, would have failed
with the error message:
Permission denied: you do not have permission to execute the procedure
"sa_materialized_view_info"
This has been fixed.
================(Build #3831 - Engineering Case #555208)================
Interactive SQL utility options, which were set from a .SQL file while it
was running as a console application, would not have been saved. This has
been corrected so that now they are.
================(Build #3829 - Engineering Case #553719)================
The Unload utility (dbunload) was not able to rebuild a pre-Version 10 database
if the variable SATMP was unset, but variable ASTMP was set. Dbunload would
have returned a connection error in this case. This has been fixed.
Note, as a workaround, the SATMP variable can be set.
================(Build #3821 - Engineering Case #554242)================
The Validation utility (dbvalid) was not validating materialized views by
default. The utility was generating a list of tables to validate that only
included base tables. This has been corrected to incorporate initialized
materialized views in either the fresh or stale state.
================(Build #3819 - Engineering Case #553911)================
The Service utility (dbsvc) included with versions 10.0.0 and up, did not
recognize services created with pre-10.0 versions of dbsvc. This has been
fixed. Old services can now be viewed, deleted, started and stopped. If an
old service is created using the patched dbsvc, the new service will no longer
be visible to pre-10.0 dbsvc.
================(Build #3818 - Engineering Case #552779)================
When executing an "OUTPUT" statement, dbisqlc would have executed
the current query two extra times. Some output file formats require a particular
date format to be used and in version 10.0.0 and later, changing the date
format option on a connection does not affect cursors that are already open.
To work around this change of behaviour, dbisqlc closed the current query,
changed the DATE_FORMAT option to the format required by the output file
format and then reopened the query to write the result set to the output
file. It then closed the query again, restored the old DATE_FORMAT option,
and reopened the query. Thus the query was executed a total of three times.
Note that the Java-based dbisql has always executed the query a total of
two times (once for the original query, once for the OUTPUT statement). That
behaviour is not addressed by this change.
The problem in dbisqlc has been corrected in most cases by avoiding the
close/set-date-format/reopen operations if the requested output file format
does not have a mandated date format or if the current date format matches
the date format required by the output file format. If the specified output
file format requires a specific date format and it is different from the
current date format, the query will still be executed two extra times. To
avoid executing the query multiple times, set the DATE_FORMAT option for
the current connection as listed below before executing the query for the
first time:
Output file format DATE_FORMAT setting
DBASEII MM/DD/YY
DBASEIII YYYYMMDD
FOXPRO YYYYMMDD
WATFILE YYYY/MM/DD
Output file formats not listed above do not have a mandated date format
and dbisqlc will not close/reopen the current query to execute the OUTPUT
statement.
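For example, to produce a WATFILE output file without the extra executions,
the option could be set first (table and file names here are invented):

```sql
SET TEMPORARY OPTION DATE_FORMAT = 'YYYY/MM/DD';
SELECT * FROM SalesOrders;
OUTPUT TO 'orders.wat' FORMAT WATFILE;
```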
================(Build #3815 - Engineering Case #552493)================
When rebuilding databases on Windows Mobile devices, the Unload utility (dbunload)
could have failed with the error "Table SYSPROCP not found". If
the unload was successful, reloading the new database by running the resulting
reload.sql file with the Script Execution utility (dbrunsql) could have failed
with the error "Cursor not open" when executing a call statement.
Both of these problems have now been corrected.
================(Build #3811 - Engineering Case #551820)================
If a PARAMETERS statement was run from an Interactive SQL window a dialog
was displayed prompting for the values. If the PARAMETERS statement was then
rerun, the prompt was not displayed, and the previously entered values were used.
This behavior was not intentional and has been corrected. Now, the prompt
is displayed each time the PARAMETERS statement is executed.
================(Build #3805 - Engineering Case #551124)================
Text completion would have inadvertently added the table's owner name when
completing a column name after a table alias in the following places:
1. The column list in a SELECT statement
select C. from customers C
^
2. The WHERE clause in a DELETE statement
delete from customers C where C.
^
3. The SET or WHERE clauses of an UPDATE statement
update Customers C set C.
^
update Customers C set C.City = 'Waterloo' where C.
^
This has been fixed by suppressing adding the owner's name if the inserted
text immediately follows a table alias.
================(Build #3794 - Engineering Case #549466)================
The "-host" and "-port" command line options were completely
ignored when connecting to a SQL Anywhere database. This problem also affected
the Console utility (dbconsole). It has now been fixed.
As a workaround, use the "-c" command line option instead.
================(Build #3779 - Engineering Case #541310)================
If dbunload was used to attempt reloading a version 9 or earlier database
that needed recovery, the dbunload support engine would have failed an assert
and shut down. The assert failure has been fixed, but pre-version 10 databases
needing recovery still cannot be reloaded with dbunload. If such a reload
is attempted, dbunload will now display the error message "Failed to
autostart server". The database will need to be started using
a pre-10 server, and if it then recovers successfully, it can be reloaded
after the pre-10 server is shut down.
================(Build #3773 - Engineering Case #545543)================
Unloading and reloading a 9.0.2 database could have failed with a 'capability
not found' error if the 9.0.2 database had remote servers defined and contained
capabilities that do not exist in later versions. This problem has now been
fixed. The
unload/reload scripts now check for the existence of each capability in SYSCAPABILITYNAME
prior to issuing the ALTER SERVER ... CAPABILITY statement.
================(Build #3773 - Engineering Case #545251)================
The MobiLink Listener (dblsn), with IP tracking off (-ni) or default UDP
listening off (-nu), may have shut down unexpectedly after the first notification.
This problem was introduced by the changes made for Engineering case 535235.
This has now been fixed.
================(Build #3771 - Engineering Case #463887)================
Deployment Wizard installs containing ADO.Net components would have failed
without a good error message when trying to register the .Net components
on a system with no .Net framework installed. This has been fixed so that
the install now checks for the framework if it is required, and issues a
warning.
================(Build #3761 - Engineering Case #541053)================
A query containing an EXISTS() predicate and returning distinct rows may have
generated an incorrect result set. For this to have occurred the EXISTS
predicate must have been able to be flattened into the main query block,
and a KEYSET root must have been used for the query. The incorrect result
set may contain duplicate rows. This problem has now been fixed.
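A sketch of the affected query shape (table and column names invented); here
the EXISTS predicate can be flattened into the main query block:

```sql
SELECT DISTINCT T1.x
FROM T1
WHERE EXISTS ( SELECT 1 FROM T2 WHERE T2.y = T1.x );
-- When executed through a KEYSET cursor, the result could have
-- contained duplicate rows despite the DISTINCT.
```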
================(Build #3761 - Engineering Case #539085)================
A number of changes have been made to improve the performance of the Interactive
SQL utility over networks with high latency. All the changes are related
to minimizing the number of server requests.
================(Build #3760 - Engineering Case #543248)================
If a comment was created for a primary key, the comment would not have been
unloaded by the Unload utility (dbunload). This has been fixed.
================(Build #3760 - Engineering Case #543231)================
The OK button on the "Connect" dialog could have failed to do anything
if all of the following were true:
1. The "Fast launcher" option was enabled
2. The "Connect" window was opened and left open in one DBISQL
window
3. A "Connect" window was opened and closed from another DBISQL
window
This has been fixed. To work around the problem, close the "Connect"
window and reopen it.
================(Build #3744 - Engineering Case #540823)================
If a case-sensitive database was created prior to version 10 and was initialized
with collation 857TRK, the Unload utility (dbunload) would have failed to
unload it correctly. This has been fixed.
================(Build #3744 - Engineering Case #540800)================
The Interactive SQL utility (dbisqlc) could have incorrectly reported a syntax
error when executing an INPUT statement if the user or table name required
quoting to be a valid identifier. This has been fixed.
================(Build #3737 - Engineering Case #539356)================
The Interactive SQL utility (as well as all the graphical administration
tools) did not work with authenticated servers. This has been corrected.
================(Build #3736 - Engineering Case #534001)================
The Java Runtime Environments that are included in SQL Anywhere have been
updated to version 1.4.2_18 for 9.0.2 and 1.5.0_16 for 10.0.1. These updates
include a number of security fixes which do not directly impact SQL Anywhere
software, but were done to help those customers whose corporate policies
preclude the installation of older JRE updates which contain known security
defects.
In the future, customers will be able to update the JRE themselves by following
instructions which will be made available shortly.
================(Build #3731 - Engineering Case #538099)================
Attempting to copy a large number of values in the "Results" pane
could have caused Interactive SQL (dbisql) to crash. This has been fixed
so that an error message is now displayed and the copy is aborted.
================(Build #3727 - Engineering Case #536543)================
During diagnostic tracing, CONNECT and DISCONNECT requests, as well as other
information, could have been missing for a connection. DISCONNECT requests
were missing for some user connections, and CONNECT requests and all statistics
were missing for internal connections. This has been fixed. As well, some
internal connections will be logged with both CONNECT and DISCONNECT requests,
while others will not be displayed.
================(Build #3695 - Engineering Case #529816)================
The changes for Engineering case 499958 did not cover all of the possible
pasting actions, and the Interactive SQL utility could still have crashed
in some cases. These other cases have now been fixed as well.
================(Build #3688 - Engineering Case #500319)================
On the Index Size tab of the Index Consultant, radio buttons for usable options
were displayed greyed out as if they were disabled. Other options corresponding
to radio buttons were enabled, and could have been selected. This has been
corrected so that the options do not appear greyed out when they are enabled.
================(Build #3688 - Engineering Case #499958)================
Pasting more than a couple of million characters into dbisql could have caused
the editor to become unresponsive, and eventually report an out of memory
error. This has been fixed so that a check is now done to determine if there
is enough memory to insert the text and display an error message if there
is not.
================(Build #3688 - Engineering Case #498533)================
The -onerror command line option was being ignored if the Interactive SQL
utility was not connected to a database. This has been fixed.
================(Build #3688 - Engineering Case #497495)================
The OUTPUT statement was writing DECIMAL numbers with thousands separators.
This was an inadvertent change from previous versions and caused an ambiguity
when writing ASCII files if the field separator was the same as the thousands
separator. This has been fixed.
================(Build #3685 - Engineering Case #494450)================
Global shared temporary tables were being unloaded as regular global temporary
tables (i.e. non-shared). This has been fixed.
================(Build #3683 - Engineering Case #498793)================
When the Interactive SQL utility was run in console mode, if there was an
error fetching rows from a result set, the cause of the error (if known)
was not being displayed. This has been fixed. Note that this problem existed
only in console mode; when run as a windowed program, the full error information
was displayed.
================(Build #3683 - Engineering Case #494583)================
The Index Consultant wizard left the Interactive SQL utility (dbisql) in
a state where autocommit was on. This has been fixed.
================(Build #3678 - Engineering Case #496407)================
In exceptionally rare circumstances, the server may have crashed trying to
collect information about database pages to be loaded (cache warming) the
next time the database was started. This has been fixed.
================(Build #3676 - Engineering Case #497515)================
The Interactive SQL utility dbisql could have crashed on startup if it had been
configured to enable source control support. The crash depended on using
the default Windows source control system (i.e. NOT the "custom"
option), and would only have occurred if the source control system asked
dbisql to display a message in response to opening the source control project.
This has now been fixed.
================(Build #3676 - Engineering Case #496538)================
The Interactive SQL utility dbisql would have reported an internal error
when attempting to open a SQL file which was larger than about 5 MB. This
has been fixed so that it now reports an error saying that there was not
enough memory to open the file.
================(Build #3654 - Engineering Case #493442)================
Very large or very small numbers could have been displayed in exponential
notation. This was different from previous versions of the software where
numbers were displayed in plain decimal notation. Now, very large and very
small numbers are displayed in decimal notation again. Also, the numbers
that are written by the OUTPUT statement are now also similarly formatted
using normal decimal notation.
================(Build #3654 - Engineering Case #487014)================
Diagnostic tracing with
Scope: origin
Origin: external
Tracing type: statements_with_variables
Condition: none
would not have recorded any statements with an external origin. An incorrect
string comparison was being used to determine whether a statement needed
to be logged. This has been fixed.
================(Build #3652 - Engineering Case #487166)================
If a deadlock occurred in a database that had tracing with high levels of
detail attached, and the tracing data was saved and viewed from Profiling
mode in Sybase Central,
the primary keys for rows that had caused the deadlock would not have been
reported in the Deadlocks tab. This has been fixed.
This behaviour is only considered invalid if tracing data is saved in the
database that is being profiled. If tracing data is saved in an external
tracing database, primary key values for rows in the original database cannot
be reported.
================(Build #3638 - Engineering Case #491414)================
The Index Consultant wizard was not working correctly with the ENTER key.
The Default button was not properly set, and focus was not properly set on
some pages of the wizard.
This has been fixed.
================(Build #3637 - Engineering Case #489239)================
The Apache redirector did not support the MobiLink client's HTTP persistent
connections. Clients that attempted to use persistent connections would have
been switched to non-persistent HTTP after the first request. This has been
corrected.
================(Build #3630 - Engineering Case #489238)================
When the Data Source utility dbdsn was used to create an ODBC data source
for the iAnywhere Solutions Oracle ODBC Driver (using the -or switch) on
UNIX, the driver name in the data source would have been incorrect (libdboraodbc10.so
rather than libdboraodbc10_r.so). This has now been corrected.
================(Build #3628 - Engineering Case #488859)================
The Interactive SQL utility's Index Consultant could have failed to recommend
indexes on a query containing the word "GO" in an identifier (for
example, SELECT * FROM CATEGORY), complaining that it could only analyze
one query at a time. This has been fixed.
================(Build #3620 - Engineering Case #487869)================
When using Text Completion, the list would not have contained any database
objects if opened immediately after an owner name which was followed by a
period, for example:
SELECT * FROM myUser.
Now, database objects are listed correctly. This problem only affected those
owners which did not own any stored procedures.
================(Build #3619 - Engineering Case #485979)================
A procedure that was used in the FROM clause of a SELECT statement, may have
returned the error "Derived table '<procname>' has no name for
column 2". This would have happened if the SELECT statement in the procedure
referenced a table without qualifying it with the owner, and only the procedure's
owner could select from the table without a qualifying owner (i.e. not the
user who executed the CREATE/ALTER PROCEDURE statement). This has
been fixed.
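A sketch of the scenario (user, procedure, and table names invented): a DBA
creates a procedure owned by user1, and only user1 can resolve the unqualified
table name:

```sql
-- Executed by a DBA; user1 owns table T:
CREATE PROCEDURE user1.proc1()
RESULT ( a INT, b INT )
BEGIN
    SELECT a, b FROM T;  -- unqualified; resolves only for user1
END;

SELECT * FROM user1.proc1();
-- could have failed with "Derived table 'proc1' has no name for column 2"
```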
================(Build #3613 - Engineering Case #486896)================
The Deployment wizard did not deploy the utility dbspawn.exe when either
the Personal or Network servers were selected. This has been corrected by
adding dbspawn.exe to the "server core" list of files to be deployed.
================(Build #3611 - Engineering Case #485811)================
When the Interactive SQL utility dbisql was run as a console application
with bad command line options, its return code was always zero. This has
been corrected so that now it is 255, as documented:
SQL Anywhere® Server - Programming > Exit codes
Software component exit codes
================(Build #3603 - Engineering Case #485584)================
The Interactive SQL utility did not parse the "DESCRIBE objectName"
statement correctly unless it was executed on its own, and not part of a
sequence of statements. This has been fixed.
================(Build #3600 - Engineering Case #484964)================
The command line option that represents a memory value, used by all the database
tools including dbmlsync, dbremote, dbltm and the MobiLink Server, was not
recognizing "g" and "G" as valid characters for a gigabyte
of memory. This code is not used by the database server. This has been fixed,
so that "1G" or "1g" can now be specified as a valid memory
value. As a workaround, "1024M" can be used to represent a gigabyte
of memory.
================(Build #3597 - Engineering Case #484368)================
When attempting to unload and then reload a database created with an older
build, using a more recent build, if the database had Remote Data Access
servers defined, then there was a chance the reload could have failed with
the error: "Server capability name 'encrypt' could not be found in the
SYS.SYSCAPABILITYNAME table". This problem has now been fixed.
================(Build #3583 - Engineering Case #483314)================
Interactive SQL could have crashed if the menu item for a recently opened
file (at the bottom of the "File" menu) was clicked while a statement
was currently being executed. This has been fixed.
================(Build #3581 - Engineering Case #482833)================
The return code was not set correctly following an EXIT statement if it was
executed from a .SQL file, its argument was not a literal, and
the "Show multiple result sets" option was ON. That is,
SET TEMPORARY OPTION isql_show_multiple_result_sets='on';
EXIT 123;
worked, but
SET TEMPORARY OPTION isql_show_multiple_result_sets='on';
CREATE VARIABLE retcode INT;
SET retcode = 123;
EXIT retcode;
did not; the return code was always zero. This has been fixed.
================(Build #3581 - Engineering Case #482822)================
An INPUT statement could have failed if it referenced a table owned by the
current user and there was also a table with the same name which was owned
by a different user, and the owner was not given in the INPUT statement.
This has now been fixed.
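A sketch of the ambiguity (names invented); owner-qualifying the table
avoided the problem:

```sql
-- Two tables named Employees exist: one owned by the current user,
-- one owned by another user.
INPUT INTO Employees FROM 'employees.txt';          -- could have failed
INPUT INTO my_user.Employees FROM 'employees.txt';  -- qualified form worked
```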
================(Build #3578 - Engineering Case #481922)================
Backslashes in SQL remote options, such as in:
SET REMOTE "FILE" OPTION "PUBLIC"."Directory"
= '\\\\MACHINE\\Folder\\Subfolder';
were not being preserved when the database was unloaded. Given the above
option setting, the reload.sql file would have contained:
SET REMOTE "FILE" OPTION "PUBLIC"."Directory"
= '\\MACHINE\Folder\Subfolder';
which, on reload, would be incorrectly interpreted as "\MACHINE\Folder\Subfolder",
causing SQL Remote to fail. This has been corrected.
================(Build #3572 - Engineering Case #481738)================
If the Interactive SQL utility dbisql used the -onerror command line option
when connected to an authenticated server, the connection would not have
been authenticated. This would have caused some statements to fail
with authentication errors after the grace period had expired. This has been
fixed.
================(Build #3563 - Engineering Case #480659)================
The Interactive SQL utility could have become unresponsive immediately after
loading a .SQL file. This problem would have been very rare and timing-dependent,
and would more likely have occurred on systems with fast processors. This
has been fixed.
================(Build #3554 - Engineering Case #479086)================
When importing data into an existing table, the "Preview" table
in the Import Wizard would have shown values in BINARY columns as "[B@"
followed by up to 8 numbers and letters. The value from the file is now correctly
displayed.
================(Build #3554 - Engineering Case #478915)================
The server may have crashed, or failed assertion 101518, if an UPDATE statement
contained multiple tables in the table-list, and columns from all the tables
were set to the same expression.
For example, in the following update the columns T1.b and T2.c are set to
the same expression @val
update T1, T2
set T1.b = @val, T2.c = @val
where T1.a = T2.a
This has been fixed.
================(Build #3552 - Engineering Case #478298)================
When running the Unload utility on a database that was the primary site for
a Sybase Replication Server, the REPLICATE ON clause was not being added
to the ALTER TABLE and ALTER PROCEDURE statements in the reload.sql file.
This has been fixed.
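After the fix, the reload.sql file includes statements along these lines (the table and procedure names here are hypothetical):

```sql
-- Re-mark objects for Sybase Replication Server after the reload
ALTER TABLE Customers REPLICATE ON;
ALTER PROCEDURE UpdateInventory REPLICATE ON;
```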
================(Build #3551 - Engineering Case #476693)================
It was not possible to use the INPUT statement to populate a GLOBAL TEMPORARY
table that did not preserve rows on commit. The INPUT statement had the unintentional
side-effect of committing changes. This has been fixed; INPUT no longer performs
a COMMIT.
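A minimal sketch of the affected scenario (the table, column, and file names are hypothetical):

```sql
-- Rows in this table vanish on COMMIT
CREATE GLOBAL TEMPORARY TABLE SessionData (
    id INT PRIMARY KEY,
    payload VARCHAR(100)
) ON COMMIT DELETE ROWS;

-- Before the fix, the implicit COMMIT performed by INPUT
-- discarded the rows immediately after they were loaded.
INPUT INTO SessionData FROM 'data.txt' FORMAT ASCII;
```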
================(Build #3551 - Engineering Case #475259)================
When the variable "@@rowcount" was selected, the Interactive SQL
utility would always have displayed the value "1". This has been
corrected so that the actual value is now displayed.
================(Build #3549 - Engineering Case #477714)================
The ALTER TRIGGER SET HIDDEN statement incorrectly removed the table owner
from the CREATE TRIGGER statement stored in the catalog. As a result, reloading
a hidden trigger definition would have caused the SQL error "Table
... not found" if the DBA user could not access the trigger's table without
the owner qualifier. This has been fixed.
In order to unload an existing database with this problem, the trigger definition
will first need to be recreated and then hidden again.
================(Build #3542 - Engineering Case #476938)================
A query with surrounding brackets and a preceding common table expression
would have given a syntax error when used in a procedure. This has been fixed.
The workaround is to remove the brackets.
================(Build #3536 - Engineering Case #475293)================
A client-side backup (ie using the Backup utility or calling the dbtools
function DBBackup) that truncated the log, could have created a backed-up
log file that was invalid. When this occurred the logfile would have appeared
to be too short; i.e., the end offset of the backed-up log did not match
the starting offset of the current log. This problem did not occur when
the log was not truncated, or when calling the BACKUP DATABASE statement.
This has been fixed.
================(Build #3533 - Engineering Case #475752)================
If a database involved in synchronization or replication was rebuilt using
the Unload utility, the truncation point values stored in the database could
have been set to values that would have caused transaction logs to be deleted
via the delete_old_logs option, when they were still required. This has been
fixed. A database rebuilt using "dbunload -ar" will now have the
resulting database truncation points set to zero. These values will be set
more accurately the next time dbremote, dbmlsync, or dbltm is run. If the
database is rebuilt manually, the truncation points can be zeroed by running:
dblog -il -ir -is dbname.db
================(Build #3532 - Engineering Case #475452)================
If a remote server was defined with a USING string that contained single
quotes, the Unload utility would have generated a CREATE SERVER statement
that would have failed with a syntax error because the quotes were not doubled.
This has been fixed.
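For illustration (the server name and connection string are hypothetical), the generated statement now doubles any embedded quotes:

```sql
-- The single quote inside the USING string is doubled so the
-- generated CREATE SERVER statement parses correctly on reload.
CREATE SERVER RemoteSrv CLASS 'SAODBC'
USING 'DSN=it''s a dsn';
```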
================(Build #3531 - Engineering Case #477738)================
On Unix systems, the certificate utilities viewcert and createcert, were
only being installed with ECC components and not with RSA components. Also,
libdbcecc10_r.so, the new ECC support library for viewcert and createcert,
was never installed at all. This has been fixed. On platforms where the utilities
are supported, the EBF installer will now install viewcert and createcert
if SQL Anywhere or MobiLink RSA libraries have been previously installed.
The library libdbcecc10_r.so will be installed only if SQL Anywhere or MobiLink
ECC libraries have been previously installed.
================(Build #3511 - Engineering Case #472619)================
The Unload utility would have reported a syntax error if the tsql_variables
option was set to 'on'. This option controls how variable references starting
with @ are handled. The unload script now temporarily sets the option to
the default value (off).
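The generated script now does something along these lines (a sketch; the exact statements dbunload emits may differ):

```sql
-- Restore the default so @-prefixed identifiers in the
-- generated statements are not treated as T-SQL variables.
SET TEMPORARY OPTION tsql_variables = 'off';
```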
================(Build #3507 - Engineering Case #471802)================
The dbsupport utility can be used to submit, via the Internet, diagnostic
information and crash reports, or to submit feature statistics. When dbsupport
prompted the user to submit a particular crash report, and the user declined,
it still attempted to submit feature statistics, which is not desired. When
dbsupport was configured with "dbsupport -cc no", the intended
behavior is to not prompt for permission to submit reports, and to not submit
reports. Although the crash reports were not being submitted, dbsupport
still attempted to submit feature statistics, which is also not desired.
These issues have been fixed.
================(Build #3507 - Engineering Case #471710)================
When submitting an error report with dbsupport, minidumps larger than 1Mb
in size would silently have been omitted from the submission. Only the crash
log would have been included. Also, when printing an error report with dbsupport,
minidumps larger than 1Mb in size would not have been printed. These issues
would typically only have occurred on Unix systems, since Windows minidumps
are much smaller. This has been fixed.
================(Build #3506 - Engineering Case #470660)================
Items in the "Edit" menu were not enabled and did not perform consistently:
1. The "Edit/Copy" menu item is now enabled if a results table
has focus and at least one row is selected.
2. If the Results panel is selected, the "Edit/Paste" menu item
is no longer enabled.
3. If a results table was focused, the "Edit/Delete" menu item
could have been clicked, but nothing would have happened. Now, the menu item
will cause the row to be deleted, the same as if the DELETE key had been
pressed.
4. The "Cut", "Undo", "Redo", "Find/Replace",
"Find Next", and "Go To" menu items are now enabled only
if the "SQL Statement" field has focus. Previously, they were enabled
even if the "Results" panel was selected.
================(Build #3503 - Engineering Case #470854)================
When printing an error report to the console with "dbsupport -pc",
binary data is represented in the form of a hex dump. For non-printable characters,
the hexadecimal representation was incorrect. This has been fixed.
================(Build #3502 - Engineering Case #470050)================
It was possible when rebuilding a database (version 10 or earlier) to version
10 using dbunload -an, -ar or -ac, that the new database would have been
missing some objects. This would only have occurred if the database being
unloaded contained GLOBAL TEMPORARY TABLES that had foreign keys, or comments
using COMMENT ON, on them. The resulting database could have been missing
indexes and/or comments on indexes for any tables, including base tables,
that were to be created after those for the GLOBAL TEMPORARY TABLE. This
has now been fixed. The fix requires a new dbtools library, as well as the
updated script files, unload.sql and unloadold.sql.
================(Build #3500 - Engineering Case #471155)================
The Unload utility could have crashed if the userid specified in the -ac
connection parameter was 'dba', but used a non-default password (i.e. not
'sql'), and the userid of the source connection (i.e., -c parameter) was
'dba' as well. This has been fixed.
Note, it is recommended that when using -ac that the destination database
have default dba credentials. The password can then be changed after dbunload
completes.
================(Build #3499 - Engineering Case #470058)================
If a database contained a user-defined message that included an embedded
quote, attempting to rebuild the database would have failed when the resulting
CREATE MESSAGE statement was executed. This has been fixed.
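For example (the message number and text are hypothetical), the embedded quote must be doubled in the generated statement:

```sql
-- The embedded single quote is doubled so the statement
-- executes cleanly during the rebuild.
CREATE MESSAGE 20001 AS 'Customer''s record is locked';
```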
================(Build #3498 - Engineering Case #469896)================
Selecting a different value in the "Encoding" combobox did not
correctly update the preview of table data in the Import Wizard. This has
been corrected.
================(Build #3496 - Engineering Case #468878)================
After executing a statement, the toolbar buttons could have remained disabled,
even though they should have been enabled. This problem was timing dependent,
so it tended to appear on some machines from time to time, but not on others
at all. This has now been fixed.
Note that the menu items and accelerator keys corresponding to the toolbar
buttons were enabled correctly and were functional.
================(Build #3494 - Engineering Case #461299)================
When using the Unload utility dbunload and doing an external unload of a
database to files (i.e., using -xx or -xi and specifying an unload directory)
on a system with a multi-byte OS charset, then the unloaded character data
could have been inconsistently encoded, or incorrectly unloaded. For this
problem to have occurred, a connection charset of something other than the
OS charset must have been specified. For data to be incorrectly unloaded,
the specified charset needed to be one of Windows-31j (sjis/cp932), GBK (cp936),
GB18030, cp950, big5-hkscs, or cp949, and the character data needed to contain
the value 0x5c (i.e. an ASCII backslash) as a follow-byte. Otherwise, the
unloaded data may just have been inconsistently encoded (i.e., some bytes
encoded as hex '\xDD' characters, and other bytes left as is). While the
10.x fix addresses this problem for any unload connection charset, the fix
for earlier versions addresses this problem only when the specified connection
charset is either the OS or the database charset.
It is generally recommended that the database charset be used for unloading
as it avoids both the overhead of unnecessary charset conversion and potentially
lossy conversions.
================(Build #3493 - Engineering Case #468882)================
In certain situations, for example, when the database server was shutting
down, DBLauncher may not have displayed all of the messages it received
from the database server, making it look like the database server had hung.
This has been fixed.
================(Build #3489 - Engineering Case #467789)================
Interrupting a SQL statement would not have also aborted any subsequent statements
which were pending execution. For example, if a number of statements, each
separated by the command delimiter, were entered in the "SQL Statements"
field and then executed, attempting to abort them by clicking the "Interrupt"
toolbar button would only have aborted the statement currently executing
when the button was clicked; the remaining statements would have continued
to be executed. This has been fixed so that the remaining statements are
not executed.
================(Build #3489 - Engineering Case #467783)================
When executing a statement, the "Execution time" message displayed
in the "Messages" pane could have been displayed before all of
the asynchronous messages caused by a statement had been displayed. This
has been corrected so that the execution time message follows the asynchronous
messages. Asynchronous messages are those generated explicitly by the MESSAGE
statement, or implicitly by CREATE DATABASE.
================(Build #3488 - Engineering Case #467621)================
The Server Licensing utility, dblic, is used to modify the server's licensing
information. In version 10.0.1, with the introduction of license files, dblic
operates on the license file instead of the server executable. As an extension,
it is possible to still specify the name of the server executable as the
argument to dblic. When doing this though, the contents of the license file
was being written out over top of the server executable, rendering it unusable.
This has been fixed so that it is now possible to specify either the name
of the license file or the server executable and dblic will modify only the
license file.
================(Build #3485 - Engineering Case #467145)================
The keyboard accelerator for the "SQL/Stop" menu item was displayed
as "Ctrl+Clear", even though there is no "Clear" key
on most keyboards. This text has been changed to read "Ctrl+Break".
================(Build #3483 - Engineering Case #466265)================
When unloading a database, either with the Unload utility or the UNLOAD statement,
DDL statements to create a non-primary key index on a global temporary table
would not have been written to the reload.sql file. This has been fixed
and proper DDL to re-create the index is now written to the reload file.
================(Build #3477 - Engineering Case #465405)================
The Edit/Select All menu item would only have operated on the contents of
the "SQL Statements" field. The intended behavior is that this
operation select the contents of the active pane. This has been fixed so
that if a result table has focus, the Edit/Select All menu item selects all
of the rows it contains.
================(Build #3475 - Engineering Case #464838)================
It was possible for the Interactive SQL utility to crash (NullPointerException)
if the window was closed while editing table data. This has been fixed.
================(Build #3474 - Engineering Case #464341)================
Cancelling a statement containing a long-running function call would have
appeared to succeed (i.e. the "Execute" menu item and toolbar button
were enabled), but the statement might have been left running if the function
was executed as a result of fetching rows from the statement's result set.
This has been fixed.
================(Build #3474 - Engineering Case #463740)================
Interactive SQL could have reported an internal error (ClassCastException)
when importing UNIQUEIDENTIFIER data into UltraLite databases. This would
only have happened when running the Import wizard if there was already a
result set showing. This has now been fixed.
================(Build #3472 - Engineering Case #462446)================
The Interactive SQL utility could have reported an out-of-memory error when
executing some large .SQL files or statements, which contained Transact-SQL
CREATE PROCEDURE or IF statements. This has been fixed.
================(Build #3470 - Engineering Case #462661)================
If a user was granted BACKUP or VALIDATE authority, this authority would
have been lost after rebuilding the database using the Unload utility. This
has been fixed.
================(Build #3470 - Engineering Case #461274)================
If connection parameters passed to the Unload utility using the -c or -ac
command line options failed to parse, it would have shutdown without displaying
any error message. This has been fixed.
================(Build #3470 - Engineering Case #455668)================
It was possible for the database server to crash when attempting to run a
corrupted database file. Assertion 201418 'Row (page:index) has an invalid
offset' has now been added to detect this corruption.
================(Build #3419 - Engineering Case #468631)================
All of the GUI applications shipped with SQL Anywhere for Mac OS X, such
as the Interactive SQL utility and DBLauncher, would have stopped working
after applying
updates to Mac OS X, specifically Security Update 2007-004. The applications
would either have crashed, or displayed a message similar to the following:
The library dbput9_r could not be loaded. This may be because the provider
is being re-loaded (in which case you need to restart the viewer) or because
the library could not be found in the Adaptive Server Anywhere installation.
Service management will not be available.
or:
Link (dyld) error: Library not loaded: libdbserv9_r.dylib Referenced from:
/Applications/SQLAnywhere9/System/bin/dbsrv9 Reason: image not found
This has been corrected.
================(Build #3692 - Engineering Case #528978)================
The SQL Remote Message Agent (dbremote) could have displayed the error, "SQL
statement failed: (-260) Variable 'n?' not found", where ? was an integer
greater than or equal to 10, if a replication table contained more than 9
columns (they could have been CHAR, BINARY or BIT type columns) with a data
length greater than 256 bytes. This problem has now been fixed.
================(Build #3927 - Engineering Case #581542)================
If a database had a schema with a very large number of columns, it was possible
when dbxtract was run that the SELECT statement generated to extract
the data from some of the tables would have used an incorrect column order.
This would likely have resulted in the rebuild failing, since the data types
of the columns in the new database would not match the data that have been
extracted from the remote database. The problem has now been fixed.
Note that this problem only affected dbxtract, and not dbunload.
================(Build #4039 - Engineering Case #619254)================
If a database had been initialized with the UCA collation sequence, with
accent sensitivity respected on all UCA string comparisons, it was likely that
operations on tables without a SUBSCRIBE BY clause in the publication definition
would have failed to replicate. No errors would have been reported, but operations
that should have replicated would not have been sent. This has now been fixed.
================(Build #4020 - Engineering Case #616829)================
If the SQL Remote Message Agent (dbremote) connected to a database that had
remote or consolidated users defined, but did not have a publisher defined,
then dbremote would not have reported any errors, but would have simply reported
"Execution Complete". This has been corrected so that dbremote
will now report an error indicating that no publisher was defined in the
database.
================(Build #3990 - Engineering Case #467100)================
While very rare, a client application using a shared memory connection could
have hung forever while executing a statement. This has been fixed.
================(Build #3982 - Engineering Case #596641)================
If all of the following conditions were met, then SQL Remote would have continued
to hold locks in the database until the next time that it needed to send
the results of a SYNCHRONIZE SUBSCRIPTION to a remote database:
1) SQL Remote was running in send-only mode (-s)
2) SQL Remote was running in continuous mode
3) SQL Remote was satisfying a resend request for user "X", and
was forced to re-scan the transaction logs
4) While scanning the transaction log, a SYNCHRONIZE SUBSCRIPTION operation
was scanned for a user "Y"
5) User "Y" had already been sent the results of the SYNCHRONIZE
SUBSCRIPTION operation in a previous run of SQL Remote.
This has been fixed by releasing the locks when the send phase of dbremote
reaches the end of the transaction log and determines that the SYNCHRONIZE
SUBSCRIPTION operation does not need to be sent to the remote user.
The problem can be worked around by stopping and starting the dbremote process
that was running in send-only mode.
================(Build #3718 - Engineering Case #479191)================
When SQL Remote was scanning the active transaction log to determine which operations
to send, it was possible for the process to have crashed when it had reached
the end of the active transaction log. This has been fixed.
================(Build #3691 - Engineering Case #528650)================
If the precision for a DECIMAL or NUMERIC column was greater than 30, then
SQL Remote and RepAgent would only have replicated up to 30 digits in accuracy
to a remote or secondary database, and the Log Translation utility (dbtran)
might only have written 30 digits in accuracy to a SQL file. The rest of
the digits would have been replaced by zero. This problem has been fixed.
The accuracy of the replicated/translated data should now be as high as it
was stored in the original database.
================(Build #3734 - Engineering Case #536374)================
HotSync may have logged a -305 error for a synchronization failure, and potentially
other database corruption. This has been fixed.
================(Build #4173 - Engineering Case #655358)================
During the loading of data using the UltraLite Load XML to Database utility
(ulload), columns whose values were NULL may have been set to default values.
This has been fixed.
================(Build #4140 - Engineering Case #646356)================
The error SQLE_MEMORY_ERROR could have been reported on Windows Mobile devices
when a removable media card was ejected, or when the device returned from
standby. The operation should have been silently retried for a few seconds,
and SQLE_DEVICE_IO_FAILED reported if the operation still failed. This
has now been corrected.
================(Build #4111 - Engineering Case #636651)================
The use of START AT or FIRST in a subquery may have resulted in incorrect
results. This was corrected.
================(Build #3977 - Engineering Case #594913)================
UltraLite primary key constraints must be named "primary". This
requirement was not being enforced when the primary key was defined. This
has been corrected so that it is now enforced. Databases that have primary key constraints
not named "primary" should be rebuilt.
================(Build #3970 - Engineering Case #592873)================
By specifying the primary key in a separate clause of the CREATE TABLE statement,
the UltraLite runtime allowed tables to be created with Long column types
(ie. BLOBS, CLOBS) as primary keys.
For example: 'CREATE TABLE t1( v1 LONG VARCHAR, PRIMARY KEY(v1))'
The use of long datatypes in indexes is not supported by UltraLite, and
inserting into the resulting table would have resulted in a crash. This has
been corrected; long datatypes are now flagged as invalid when used in an
index.
================(Build #3946 - Engineering Case #561596)================
Incorrect values were being used by the SQL scanner for some hexadecimal constants.
This has been corrected.
================(Build #3928 - Engineering Case #581269)================
There were no implementations for the datatypes LONG BINARY and LONG VARCHAR.
These have now been added.
================(Build #3924 - Engineering Case #580353)================
Queries with TOP or START AT clauses were incorrectly flagged with an error
when the specified value exceeded 64K. This restriction has now been removed.
================(Build #3921 - Engineering Case #580016)================
Unexpected behavior could have occurred when IN predicates were used in subqueries
in INSERT, UPDATE, and DELETE statements. This has been corrected.
================(Build #3854 - Engineering Case #562245)================
Using the MobiLink file-based download to transfer files on Palm devices
to a VFS volume, could have failed. The error reported would have been STREAM_ERROR_INTERNAL.
On some devices, VFSFileSeek returns an EOF error when seeking to the end
of file, but the seek actually succeeds. A work around for this problem has
been implemented.
================(Build #3841 - Engineering Case #557837)================
Incorrect results were possible when there was both an equality condition
and another redundant conjunctive expression such as:
x >= 2 AND x = 3
with only a column name on one side of the comparison. That column name
must also have been the first column in an index. This has now been fixed.
================(Build #3838 - Engineering Case #557685)================
UltraLite and UltraLiteJ erroneously accepted DEFAULT TIMESTAMP as a clause
in the CREATE TABLE and ALTER TABLE statements, and treated the clause as
if DEFAULT CURRENT TIMESTAMP had been entered. Attempts to execute CREATE
TABLE or ALTER TABLE statements with this clause will now result in a syntax
error.
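To illustrate the distinction (the table and column names are hypothetical):

```sql
-- Now a syntax error in UltraLite:
--   CREATE TABLE t ( ts TIMESTAMP DEFAULT TIMESTAMP );

-- Supported form: the column is initialized when the row is inserted.
CREATE TABLE AuditRow (
    id INT PRIMARY KEY,
    created TIMESTAMP DEFAULT CURRENT TIMESTAMP
);
```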
================(Build #3835 - Engineering Case #556539)================
UltraLite and UltraLiteJ were not able to recognize a correlation name following
the table name in UPDATE and DELETE statements. Without this ability, WHERE
clauses that require the correlation name to disambiguate column references
could not be written.
For example:
update Employee E
set salary = salary * 1.05
where EXISTS( SELECT 1 FROM Sales S HAVING E.Sales > Avg( S.sales)
GROUP by S.dept_no )
The syntax for UPDATE and DELETE statements has been expanded to correct
this.
================(Build #3809 - Engineering Case #551692)================
If a query consisted of a non-zero number of UNION DISTINCT operations, followed
by a non-zero number UNION ALL operations, the result set could have had
twice as many columns as were specified by the query. The leftmost columns
would have been correct, while the rightmost extra columns were bogus. The
algorithm for creating the selection list for the overall query was flawed,
and has now been corrected.
================(Build #3785 - Engineering Case #547234)================
When the UltraLite SQL functions length() and char_length() were used on
LONG VARCHAR columns, the results were incorrectly the lengths of the strings
in bytes, rather than characters. Note that some characters require multiple
bytes internally. The function byte_length() is used to determine the length
in bytes of the string. This has been fixed.
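A sketch of the distinction, assuming a UTF-8 encoded database where the string 'déjà' occupies 6 bytes:

```sql
-- length() and char_length() now count characters;
-- byte_length() still counts bytes.
SELECT length( 'déjà' ),       -- 4 characters
       byte_length( 'déjà' );  -- 6 bytes
```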
================(Build #3779 - Engineering Case #495369)================
The performance of some queries has degraded from what it was in version
9. Further optimizations have been added so that the performance has been
restored.
================(Build #3688 - Engineering Case #500482)================
Incorrect diagnostics could have been generated when there were comma-specified
joins followed by operation-specified joins (LEFT OUTER JOIN, for example),
and an ON condition in the operation-specified joins referenced a column
from the comma-separated joins. This has now been corrected.
A work-around is to place the comma-separated table expressions in parentheses.
For a query such as:
SELECT * FROM tab1, tab2, tab3 LEFT OUTER JOIN tab4 ON tab4.x = tab1.y
the work-around is to rewrite the query as:
SELECT * FROM ( tab1, tab2, tab3 ) LEFT OUTER JOIN tab4 ON tab4.x = tab1.y
================(Build #3664 - Engineering Case #495240)================
If an ALTER <column> statement encountered an error, subsequent statements
could have erroneously failed with the error SQLE_SCHEMA_UPGRADE_NOT_ALLOWED,
and/or the runtime could have experienced a crash at some later point. This
has been fixed.
================(Build #3664 - Engineering Case #494963)================
When there were at least three joins in a query table expression and there
was a reference from the ON condition to a column in a table at least three
preceding, an incorrect syntax error may have occurred. This has been corrected.
================(Build #3659 - Engineering Case #494259)================
A column could have been altered to have a different datatype, even when the
column was part of a foreign key or constraint. This is now disallowed.
================(Build #3655 - Engineering Case #493762)================
Incorrect results could have been obtained for some queries that used indexes
in which there was more than one nullable column. This has been corrected.
================(Build #3653 - Engineering Case #493478)================
Incorrect results could have been returned for some queries with row
limitation (using FIRST, TOP, and/or START AT clauses), when a query was
not read-only and when a temporary table was required to execute the query.
For example:
SELECT TOP 14 * FROM table ORDER BY table.column
when there was no index that could be used to order the data. This has now
been fixed.
================(Build #3650 - Engineering Case #492675)================
Incorrect results could have been returned for some DISTINCT ORDER BY
combinations. This would have occurred when a DISTINCT clause was used and
there were no unique indexes that could be used to guarantee distinctness,
there was an ORDER BY clause and no indexes existed to effect that ordering,
and not all of the ORDER BY constituents were found in the SELECT list.
For example:
SELECT DISTINCT last_name FROM people ORDER BY birth_date
This has now been corrected.
================(Build #3649 - Engineering Case #492344)================
An erroneous conversion error could have been detected when an IF expression
involved an aggregate. For example: "IF count(n) > 50 THEN 'good'
ELSE 'bad' ENDIF" This has been corrected.
================(Build #3646 - Engineering Case #492031)================
In order to drop a table from the database, it must first be removed from
all publications. Failing to remove the table from any publications prior
to attempting to drop it would have resulted in an error. However, the UltraLite
database would have been left in a corrupt state after the error was returned,
as the operation was not fully rolled back. This has now been fixed.
================(Build #3626 - Engineering Case #488699)================
Table expressions with brackets may have caused syntax errors. For example,
Select * from (table1) left join table2
This was corrected by adjusting the syntax to handle more general bracketing.
================(Build #3622 - Engineering Case #488275)================
It was possible for downloaded rows that contained long varchar or long binary
columns to have been corrupted. Symptoms ranged from garbage characters
read from a row to crashing the database. The problems were caused by an
uninitialized variable, so the operations that can trigger the bug were varied.
This has now been corrected.
================(Build #3611 - Engineering Case #486556)================
When running on a slow network, an UltraLite application could have failed
with the error message 'Internal Error (1003)'. This problem has now been
fixed. This change is similar to the fix for the MobiLink client, Engineering
case 486446.
================(Build #3605 - Engineering Case #485815)================
The DATEADD() function did not detect overflow situations. This has been
corrected.
================(Build #3599 - Engineering Case #485004)================
A constant at the start of an aggregate selection could have caused erroneous
results. An example would be:
SELECT 999, count(*) FROM TABLE
where the incorrect result was a row for each row in the table, instead
of a single row. Constants were not being marked as aggregates when they
occurred in aggregate selections. This has been fixed.
================(Build #3596 - Engineering Case #484452)================
It was possible for UltraLite to allow duplicate entries into unique key
indexes, or it could have incorrectly reported a duplicate entry in a unique
key. For this to have occurred, a table would need to have been left open
while many hundreds of updates happened to the same row, while many other
inserts and deletes occurred concurrently on other tables.
This has now been fixed.
================(Build #3573 - Engineering Case #481254)================
An UPDATE statement would have updated only one row, even if more rows satisfied
the WHERE conditions, if the column in the WHERE clause was indexed.
For example, consider a table with the following schema:
CREATE TABLE Tab1( pk int not null primary key, value varchar(1) not null
)
CREATE INDEX val_index ON Tab1( value )
The following statement should update all qualifying rows:
UPDATE Tab1 SET value = 'x' WHERE value = 'y'
However, since there is an index on value, this update would have updated
at most one row. This has been fixed and it will now update all qualifying
rows.
================(Build #3570 - Engineering Case #481467)================
Calling the byte_length() function with an integer value for the parameter,
would have returned a value inconsistent with SQL Anywhere server. This has
been corrected.
================(Build #3570 - Engineering Case #481427)================
Calling the byte_length() function with a NULL for the parameter would have
returned a random value instead of NULL. This has been corrected.
================(Build #3569 - Engineering Case #481432)================
The changes to the UltraLite runtime for Engineering case 480878, caused
it to not send the upload progress in the first synchronization, which is
what the MobiLink server expects. However, this change also caused the runtime
to stop sending the last download timestamp on the first synchronization
as well. This resulted in the MobiLink server using a default timestamp of
0000-01-00 00:00:00.000000, which could be rejected by the consolidated database
as an invalid timestamp. This has been fixed so that the runtime now sends
the last download timestamp on the first synchronization, but not the upload
progress.
================(Build #3564 - Engineering Case #480878)================
Synchronization of a recreated database could have failed if the remote id
was still the same. By default a new database will have a randomly generated
UUID for a remote id. This has been fixed.
================(Build #3550 - Engineering Case #476708)================
For this problem to occur, the schema of the UltraLite database had to have a table
with a unique constraint (other than the primary key) and another table referencing
that unique constraint with a foreign key. When a delete is downloaded, all
rows referencing the deleted row (via a foreign key to the table) should
also be deleted to maintain referential integrity. This was working properly;
however, a similar scenario existed if an update was downloaded and the foreign
key referencing the row did so by referencing a unique constraint instead
of the primary key. This has been fixed.
================(Build #3539 - Engineering Case #476195)================
Erroneous results could have been returned for a query with both START AT
and TOP clauses. This has been corrected.
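In SQL Anywhere syntax, SELECT TOP n START AT m returns n rows beginning at row m (1-based). A sketch of the equivalent result slice using SQLite's LIMIT/OFFSET as a stand-in, since SQLite has no TOP or START AT clause:

```python
import sqlite3

# TOP 2 START AT 3 should yield rows 3 and 4 of the ordered result;
# in SQLite the same slice is LIMIT 2 OFFSET 2.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x INTEGER)")
con.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1, 6)])

rows = con.execute("SELECT x FROM t ORDER BY x LIMIT 2 OFFSET 2").fetchall()
print(rows)  # [(3,), (4,)]
```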
================(Build #3537 - Engineering Case #476372)================
Synchronizing more than one database at a time using encryption or compression,
could have caused the UltraLite runtime to crash. This has been fixed.
================(Build #3536 - Engineering Case #476112)================
Erroneous results could have been returned for a query with an inner join
and a WHERE clause containing a subquery. This has been corrected.
For example:
SELECT COUNT(*) FROM kb_baseclass INNER JOIN customer ON kb_baseclass=customer
WHERE customer IN (SELECT client FROM contact)
================(Build #3530 - Engineering Case #475138)================
Erroneous results could have been obtained for queries with joins containing
a derived table with GROUP BY, and a WHERE clause referencing a column from
the derived table. This has been corrected.
================(Build #3523 - Engineering Case #473449)================
Database file space allocated for long varchar or long binary columns was
not properly reclaimed on deletion of the row in some cases. As a result,
the database file could have grown unexpectedly. This has been fixed. Applying
this fix will prevent further unnecessary growth but not reclaim the lost
space within the database file. To reclaim space, the database file must
be recreated.
================(Build #3511 - Engineering Case #472364)================
UltraLite allows a publication to be created with a predicate for each of
its tables. This allows users to filter rows in a table being synchronized.
If the predicate contained a subquery, it was possible that the predicate
evaluated to the wrong result, either allowing all rows to be uploaded, or
none to be uploaded. This has been fixed.
Also, note that there are two errors in the documentation. Under UltraLite
– Database Management and Reference/Working with UltraLite Databases/Working
with UltraLite publications/Publishing a subset of rows from an UltraLite
table, the following are incorrect:
"Palm OS: You cannot use a CREATE PUBLICATION statement with a WHERE
clause on this platform."
In fact, a WHERE clause can be used on Palm OS.
The paragraph "What you cannot use in a WHERE clause"
In fact, you can use columns from tables not in the article (or even not
in the publication). You can also use subqueries.
================(Build #3502 - Engineering Case #470176)================
There was no implementation for the ODBC function SQLSynchronizeW() in the
Runtime, even though it is defined in ulodbc.h. This has been corrected by
implementing SQLSynchronizeW().
================(Build #3502 - Engineering Case #466683)================
The runtime could have crashed at the end of an HTTPS synchronization. This
has been fixed.
================(Build #3499 - Engineering Case #469974)================
An application would have failed to autostart the engine when using a quoted
StartLine value that contained spaces in the path. For example, the following
startline would have failed with SQLE_UNABLE_TO_CONNECT_OR_START:
StartLine="\Program Files\uleng10.exe"
This problem has been fixed.
This problem can be worked around by making the opening quote the second
character:
StartLine=\"Program Files\uleng10.exe"
or by enclosing the entire quoted value in single quotes:
StartLine='"\Program Files\uleng10.exe"'
================(Build #3491 - Engineering Case #468456)================
For certain MobiLink server errors, such as authentication failure, a second
error may have appeared later in the log: "Download failed with client
error xxx" when download acks had been turned on. This could have been
confusing, since it suggested the error originated on the client, when the
true error was reported further up in the server log. This has been corrected;
the second error message will no longer appear.
================(Build #3486 - Engineering Case #467268)================
Synchronizations that took longer than ten minutes could have been timed out
by the MobiLink server if the synchronization parameter 'timeout' was set
to zero. MobiLink clients send keep-alive bytes to the MobiLink server at
an interval of half the timeout value to keep the connection active, but
UltraLite was not sending these bytes when the timeout value was set to zero.
This has been fixed.
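The keep-alive scheduling described above can be sketched as follows; the function name and the 240-second fallback are invented for illustration and are not MobiLink's actual defaults:

```python
# Hypothetical sketch: clients send keep-alive bytes at half the timeout
# interval. A timeout of zero must still produce keep-alives (using some
# fallback interval) so the server does not drop a long synchronization.
ASSUMED_FALLBACK_TIMEOUT = 240  # seconds; an invented value for illustration

def keepalive_interval(timeout_seconds: int) -> float:
    """Seconds between keep-alive packets for a given timeout setting."""
    if timeout_seconds == 0:
        timeout_seconds = ASSUMED_FALLBACK_TIMEOUT
    return timeout_seconds / 2

print(keepalive_interval(600))  # 300.0
print(keepalive_interval(0))    # 120.0 -- the pre-fix runtime sent nothing here
```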
================(Build #3482 - Engineering Case #466456)================
If an application had an open cursor, and another transaction deleted a row
that affected that cursor, it was possible for the cursor to have been positioned
on the wrong row. This has been fixed.
================(Build #3478 - Engineering Case #465693)================
If the ExecuteQuery method detected an error, a non-null ResultSet could
still have been returned. This was corrected.
================(Build #3475 - Engineering Case #465368)================
If the Timeout synchronization parameter was set to a value that was too
low, and a TLS or HTTPS synchronization was being done over a slow channel,
the runtime may have attempted to send a liveness packet before the TLS handshake
had been completed, causing the synchronization to fail in a number of different
ways. The MobiLink server may have reported a handshake or protocol error,
or the client could have crashed. This has been fixed.
================(Build #3475 - Engineering Case #465151)================
The UltraLite runtime would have accepted the empty string as a valid script
version. This has been fixed. The empty string is now rejected, just as
if it had been set to NULL.
================(Build #3474 - Engineering Case #464837)================
A synchronization would have failed if it used publications, and the runtime
did not know if the MobiLink server had received the upload of the previous
synchronization. This has been fixed.
================(Build #3474 - Engineering Case #464473)================
When executing the statement ALTER TABLE ADD FOREIGN KEY, there was no check
that the new foreign key rows all had matching primary rows. A check has
been added so that the statement will now fail if a primary row is missing.
================(Build #3470 - Engineering Case #460942)================
Table names exceeding 128 characters were not handled correctly. In particular,
they were being improperly truncated, allowing duplicate tables with the
same apparent name to be created. Such table names are now diagnosed as
syntax errors.
================(Build #3470 - Engineering Case #460745)================
As of the release of SQL Anywhere 10.0, synchronization concurrency was reduced.
In particular, other threads and processes were blocked from entering the
UltraLite runtime during upload, and while waiting for MobiLink to create
the download. This has been fixed.
================(Build #3470 - Engineering Case #456783)================
When the DEFAULT clause specified a character string for a column in a CREATE
or ALTER TABLE statement, and the length of that character string exceeded
126 characters, the character string was truncated to 125 characters, instead
of 126 characters. This has been corrected.
================(Build #3470 - Engineering Case #456698)================
The special value NULL, used by itself in certain SELECT expressions, could
have given incorrect results. For example, the query "select NULL union
select NULL" did not give the correct results. This has been corrected.
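The correct result is a single NULL row, since UNION (without ALL) eliminates duplicates and treats NULLs as equal for that purpose; a quick check with SQLite as a stand-in engine:

```python
import sqlite3

# UNION removes duplicate rows, and NULLs compare equal for duplicate
# elimination, so two selected NULLs collapse into one row.
con = sqlite3.connect(":memory:")
rows = con.execute("SELECT NULL UNION SELECT NULL").fetchall()
print(rows)  # [(None,)]
```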
================(Build #3470 - Engineering Case #456644)================
Calling UltraLite_ResultSet.Set did not set NOT NULL when a value was supplied.
This could also have affected the Set methods in various components.
This has been fixed.
================(Build #3470 - Engineering Case #456070)================
If CreateDatabase was called to create an encrypted database, and an encryption
key was passed via the connection parameters but ULEnableStrongEncryption
wasn't called, CreateDatabase would have created an unencrypted database
without reporting any warnings or errors. This has been fixed so that CreateDatabase
will now fail in this situation with SQLE_ENCRYPTION_NOT_ENABLED. When attempting
to connect to an encrypted database and an encryption key was provided, but
ULEnableStrongEncryption wasn't called, the runtime would have reported SQLE_BAD_ENCRYPTION_KEY,
which could have been misleading. This has been corrected so that the runtime
will now report SQLE_ENCRYPTION_NOT_ENABLED for this as well. Also, when
attempting to connect to an unencrypted database and an encryption key is
provided, but ULEnableStrongEncryption isn't called, the connection will
still succeed, but the runtime will now report a warning SQLE_ENCRYPTION_NOT_ENABLED_WARNING.
Note that these changes only apply to applications that use the C++ or embedded
SQL interfaces and don't use the UltraLite engine. Applications that use
the engine, or any of the components, did not have these problems since they
always call ULEnableStrongEncryption internally.
================(Build #3437 - Engineering Case #463517)================
On Windows CE devices, database corruption could have occurred due to a bug
in Windows CE related to growing the database file. A change to the runtime
to close and reopen the file after growing it has been implemented in an
attempt to work around the problem.
================(Build #3503 - Engineering Case #470582)================
The error 'Function or Column name reference "Unknown" must also
appear in GROUP BY' may have been erroneously raised when an IF expression
involved an aggregate function.
For example:
IF count(n) > 50 THEN 'good' ELSE 'bad' ENDIF
This has been corrected.
================(Build #4063 - Engineering Case #627238)================
The Include directory reference for the UltraLite CustDB sample on Windows
Mobile 6 platforms was incorrect. The Windows Mobile 6 platform Include directory
pointed to the %sqlanyX%\h directory, but should have pointed to %sqlanyX%\SDK\Include.
While correcting this, it was noted that there were inconsistencies in the
quoting of the Include directory across the projects. Now all references
are quoted.
================(Build #4061 - Engineering Case #626666)================
The UltraLite CustDB sample application About dialog incorrectly stated "for
Windows CE" for all Windows platforms. This has been removed from the
dialog text.
================(Build #4210 - Engineering Case #666232)================
The "Go To Table" and "Go To Foreign Key" menu items
for the ER Diagram tab did not always work. The "Go To Table" menu
item did nothing if the database tree node was collapsed. The "Go To
Foreign Key" menu item did not work at all. These have now been fixed.
================(Build #4162 - Engineering Case #652188)================
When a table was selected in the tree and the Indexes tab was shown in the
right pane, there was no File -> New -> Index... menu item. As such,
it was only possible to start the Index wizard from the toolbar button. This
has been fixed.
================(Build #4132 - Engineering Case #643953)================
In the Extract Database wizard for UltraLite, when checking or unchecking
publications using the space bar instead of the mouse, the Next button would
not have been enabled or disabled appropriately. This has been fixed.
================(Build #4067 - Engineering Case #628275)================
Clicking Finish in the Load Database wizard would have caused Sybase Central
to crash if a database id was not specified on the last page of the wizard.
This has been fixed.
================(Build #4061 - Engineering Case #626662)================
The Load Database wizard would crash on a second attempt if the first attempt
failed, or was canceled, before completion. This has been fixed.
================(Build #4048 - Engineering Case #623481)================
If the user proceeded through the Set Primary Key wizard without making any
changes and clicked the Finish button, an extraneous error message would
have been shown. Now the wizard simply closes.
================(Build #4047 - Engineering Case #623280)================
Renaming a table using its property sheet, and then attempting to open a
column's details on the Property sheet's Columns tab, would have caused Sybase
Central to crash. This has been fixed.
================(Build #3785 - Engineering Case #547224)================
When selecting data using the Interactive SQL utility, from an UltraLite
database that was UTF8 encoded, it was possible for extra garbage characters
to have been displayed at the end of the string. For example, if a VARCHAR
column contained the string 'für' (the middle letter is u umlaut) and the
database was UTF8 encoded, selecting that column would have displayed a garbage
character at the end (typically a box). Note that this was a display problem
only. This has been fixed. A possible work around is to cast the data as
VARCHAR(x), where x is a value big enough to display the data.
================(Build #3664 - Engineering Case #494155)================
UltraLite requires that each table have a primary index. Using SQL statements,
it was possible to remove or rename this primary key, which would eventually
have led to a crash of the UltraLite application. Attempting to remove or
rename a table's primary key will now result in an error.
================(Build #3660 - Engineering Case #494579)================
When changing a column's DEFAULT value to "No default value", a
syntax error would have been reported. The plugin was incorrectly executing
the statement ALTER TABLE t ALTER c DROP DEFAULT. This has been fixed so
that the plugin now uses the correct syntax ALTER TABLE t ALTER c DEFAULT
NULL.
================(Build #3656 - Engineering Case #493741)================
An attempt to alter a column to DEFAULT NULL would have been ignored. This
has now been corrected.
================(Build #3635 - Engineering Case #490471)================
When connected to two UltraLite databases and attempting to unload the first
into the second, clicking Finish on the page that asked for the destination
database would not have registered a change to the selected destination
database on that page, so the wizard tried to load into the default
destination database, which in this case was the same database as the
source database. This has been fixed so that the wizard now correctly
records the selection, and pops up a dialog letting the user know when
the source database is the same as the destination database.
================(Build #3635 - Engineering Case #490312)================
Attempting to use the Sybase Central Unload wizard to unload a database,
that was not currently connected, into an XML or SQL file would have failed
with a null pointer exception. This has been fixed.
================(Build #3634 - Engineering Case #490210)================
When viewing the properties of a table in the UltraLite plug-in for Sybase
Central, it was possible to change the table's synchronization type of Normal,
Always or Never. Doing this would have created a new table with a different
suffix (either empty, _nosync or _allsync), however the original table would
not have been dropped. This has been fixed.
================(Build #3634 - Engineering Case #489538)================
When examining the properties of a foreign key that was created as CHECK
ON COMMIT, the properties would always have reported that CHECK ON COMMIT
was off. This has been fixed.
================(Build #3600 - Engineering Case #484889)================
Attempting to unload an UltraLite database while selecting a long list of
tables to unload, would have caused Sybase Central to crash. This has been
fixed.
================(Build #3470 - Engineering Case #462362)================
The Unload wizard could still have tried to connect, even if there was a
failure. If the failure was ignored and the user clicked on Finish again,
Sybase Central could have crashed. This was most likely to have occurred
if unloading from a currently connected database. This problem may also have
occurred with other wizards. These problems have been fixed.
================(Build #3470 - Engineering Case #456562)================
When selecting publications in the Extract Wizard, if one publication was
selected, the appropriate tables were displayed. However if more than one
publication was selected, no tables were displayed. This has been fixed.
================(Build #3906 - Engineering Case #575142)================
Invalid metadata could have been constructed for a UNION DISTINCT operation.
The Interactive SQL utility uses this metadata for display purposes when
connected to an UltraLite database. This could have resulted in abnormal
terminations for statements such as: "SELECT 1 UNION DISTINCT SELECT
2". This has now been corrected.
================(Build #3755 - Engineering Case #541478)================
The table name was not reported for SQLE_PRIMARY_KEY_NOT_UNIQUE when this
error was encountered during a synchronization. This has been fixed.
================(Build #3742 - Engineering Case #540390)================
A comparison between an integer and a BINARY value (in a SQL statement) would
have caused a conversion error, 'Cannot convert numeric to a binary'. This
has been corrected.
================(Build #3689 - Engineering Case #500825)================
Attempting to execute queries where the number of join operations exceeded
15, could have caused the UltraLite runtime to crash. This has been fixed.
================(Build #3677 - Engineering Case #497511)================
The diagnosis of invalid GROUP BY expressions has been enhanced.
================(Build #3662 - Engineering Case #493738)================
Performing an ALTER TABLE statement on a table with blob columns may have
caused corruption in the database. The most likely symptom of this would
have been a crash when selecting from a table that has been altered. This
has now been fixed.
================(Build #3660 - Engineering Case #494710)================
Incorrect results could have been returned when there was an ORDER BY clause
that caused a temporary table to be generated. For this to have occurred
there must have been a subquery expression in the select list that referred
to a table that could be updated, and the query had to have been potentially
updateable (FOR READ ONLY was not specified).
This has been corrected. The workaround is to specify FOR READ ONLY on the query.
================(Build #3592 - Engineering Case #484074)================
A LIKE condition of the form: "column LIKE constant", could have
produced incorrect results when the column was not of type CHAR and occurred
as the first column of an index. This has been corrected.
================(Build #3550 - Engineering Case #478059)================
The UltraLite engine would have leaked process handles at a rate of one per
client process per second. The engine was regularly opening a handle to
each client process to determine if they were still running, but these handles
were not being closed. These handles are now closed.
================(Build #3512 - Engineering Case #470315)================
The default name of a primary key constraint is "primary". If a
table was created with a constraint name that was not the default, Sybase
Central would have crashed silently when navigating to the Data tab.
For example, navigating to the Data tab for the following table definition
would have caused Sybase Central to crash:
CREATE TABLE t1
(
c INTEGER NOT NULL,
CONSTRAINT "cn" PRIMARY KEY("c" ASC)
)
This problem has now been fixed.
================(Build #3503 - Engineering Case #470282)================
An erroneous conversion error could have been raised when executing a query
if it referenced a derived table which was empty on the right side of a left
outer join, and the derived table contained a GROUP BY clause and had a NUMERIC
item in its SELECT list. This has been fixed.
================(Build #3481 - Engineering Case #466202)================
The maximum number of active SQLCA variables (i.e. SQLCAs that have been
initialized and used to call into the runtime, but not finalized) supported
by the UltraLite engine has been increased from 31 to 63.
For .NET applications, the SQLCA limit also represents the database connection
limit, since a new sqlca is used for each connection. Also, an internal
SQLCA is used by each .NET application, so the effective connection limit
for .NET apps is 63 minus the number of running .NET clients.
Note that the runtime's connection limit is 64.
================(Build #3477 - Engineering Case #465388)================
Inserting a row that contained a zero-length binary value for a long binary
column would have caused the UltraLite engine to crash. This has been fixed.
================(Build #3470 - Engineering Case #462796)================
The ABS() function (absolute value of a numeric expression) did not properly
handle integers with more than 30 digits. This has been corrected.
================(Build #4116 - Engineering Case #638271)================
The methods ResultSet.getTimestamp() and ResultSet.setTimestamp() quietly
manipulated the database timestamp value as UTC. As a result, the JavaScript
methods Date.toString() and ResultSet.toString() would have reported different
values offset by the timezone difference. These methods now manage timestamps
in localtime relative to ULPOD. Databases with timestamp values stored prior
to this fix might contain values that were UTC based.
================(Build #3946 - Engineering Case #581830)================
The implementations of CreationParms::AddRef and CreationParms::Release,
contained confusing casting. Both methods cast the POD object to a ConnectionParms
object, which has now been fixed.
================(Build #3890 - Engineering Case #568836)================
Incorrect results could have been obtained when using an index which had
nullable columns. In some cases, fewer rows were returned than were required.
This has been fixed.
================(Build #3474 - Engineering Case #462642)================
Queries involving 'long varchar' or 'long binary' columns, containing both
null and non-null values, and a temp table, could have caused a crash in
UltraLite, signaled an error, or produced incorrect results for the 'long'
columns. This has now been fixed.
================(Build #3897 - Engineering Case #561616)================
Application errors could have occurred after opening and closing more than
255 connections. Each .NET connection allocated two SQLCAs, but only one
was freed when the connection was closed. The other would not have been
freed until the connection was garbage collected. This has been fixed.
A workaround for this problem is to call GC.Collect() regularly.
================(Build #3676 - Engineering Case #497458)================
Undefined errors could have occurred if ULDatabaseManager::CreateDatabase()
was called with a null collation. This has been fixed so that a SQLE_INVALID_PARAMETER
ULException will now be thrown for a null collation.
================(Build #3664 - Engineering Case #495578)================
The method ULDataReader.GetBytes() would have returned null if invoked for
a binary(n), or a long binary, column containing an empty string (i.e. a zero
length not null value). This has been fixed. GetBytes() will now return
a zero length array of bytes.
================(Build #3651 - Engineering Case #492148)================
If the Connection.synchronize() function failed with an exception, the message
in the exception did not contain any relevant details. For example, the
text for a SQLE_PRIMARY_KEY_NOT_UNIQUE (-193) error did not include the
table name ("Primary key for table '-n/a-' is not unique."). This
has now been corrected.
================(Build #3628 - Engineering Case #479829)================
Errors (like sticky I/O errors) reported while closing the connection would
have had incomplete error messages (i.e. I/O failed for '-n/a-'). This has
been fixed.
================(Build #3764 - Engineering Case #540349)================
The UltraLite Initialize Database utility (ulinit) would have reported a
syntax error if the reference database contained a foreign key on a table
whose name was the keyword 'name'. Ulinit was failing to quote the table
name in the foreign key statement generator. This has been fixed.
================(Build #3723 - Engineering Case #535586)================
If an index was defined in the reference database as:
create index idx on t(a asc, b asc, c asc)
The UltraLite Initialize Database utility (ulinit) would have created the
index as:
create index idx on t(c asc, b asc, a asc)
reversing the order of the columns. This has been corrected and ulinit will
now create the index in the same order as the reference database.
================(Build #3629 - Engineering Case #478925)================
When using the UltraLite Unload utility to unload an UltraLite database to
SQL Statements, the owner would have been included in the CREATE PUBLICATION
statement. The statement would not have been valid syntax for UltraLite.
This has been fixed.
================(Build #3574 - Engineering Case #481965)================
When using the following SQLAnywhere options with the reference database:
default_timestamp_increment = 10000
truncate_timestamp_values = 'On'
the UltraLite database produced when running the UltraLite Initialization
utility ulinit on this database would have caused problems when synchronizing.
MobiLink would have complained about timestamp precision mismatches. Ulinit
was not setting the timestamp_increment from the SA default_timestamp_increment
value.
The workaround is to set the timestamp_increment setting on the ulinit command
line, using
the -o keyword=value option, as follows:
ULINIT <existing options> -o timestamp_increment=1000
================(Build #3558 - Engineering Case #479825)================
A number of problems with the UltraLite Database Initialization utility have
been fixed.
Default values were being wrapped in parentheses (), for example DEFAULT
(0), which led to syntax errors. Default values that start (after skipping
white space) with an open parenthesis "(" and end with a close parenthesis
")" are now recognized, and the parentheses are automatically stripped.
Specifying the clause DEFAULT getdate(*) also led to a syntax error. All
occurrences of "(*)" in DEFAULT strings are now replaced with "()".
Previously, only DEFAULT NEWID(*) was being recognized; this change handles
all such functions.
Ulinit was failing to quote table names with leading underscore characters
"_".
The now(), current_timestamp(), and getdate() functions in DEFAULT strings
are now replaced with the string "current timestamp". This is equivalent
in operation and is the only syntax that UltraLite supports.
Quoting has been added to all uses of table names. Specifically, the CREATE
INDEX and ALTER TABLE ... ADD FOREIGN KEY statements were problematic.
Ulinit was making use of the NVARCHAR data type, but UltraLite does not
support this data type.
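The DEFAULT-string rewriting described above can be sketched as a small normalization function; the function name and exact rules here are assumptions for illustration, not ulinit's actual implementation:

```python
def normalize_default(default: str) -> str:
    """Hypothetical sketch of the DEFAULT-clause fixes described above."""
    s = default.strip()
    # Strip one pair of wrapping parentheses, e.g. "(0)" -> "0".
    if s.startswith("(") and s.endswith(")"):
        s = s[1:-1].strip()
    # Replace the "(*)" call syntax with "()", e.g. "NEWID(*)" -> "NEWID()".
    s = s.replace("(*)", "()")
    # Map timestamp functions to the one syntax UltraLite supports.
    if s.lower() in ("now()", "current_timestamp()", "getdate()"):
        s = "current timestamp"
    return s

print(normalize_default("(0)"))          # 0
print(normalize_default("getdate(*)"))   # current timestamp
print(normalize_default("NEWID(*)"))     # NEWID()
```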
================(Build #3553 - Engineering Case #479032)================
When the UltraLite Synchronization utility's (ulsync) output was redirected
to a file, and sync progress messages were requested with -v, those messages
would not have been written on some patch levels of Windows Vista. Writes
to standard output for the progress messages were being discarded when standard
output of the owning executable (ulsync.exe) was not connected to a console.
This has been fixed by using a callback function to report messages, rather
than writing messages directly to stdout.
================(Build #3550 - Engineering Case #478022)================
Applying Microsoft's XML security patch KB 936181 (MSXML 4.0 dll version
4.20.9848.0) to Windows Vista systems, would have caused the UltraLite Load
utility to crash. This problem does not show up on Windows XP. A work around
has been implemented to prevent the crash.
================(Build #3512 - Engineering Case #472222)================
If the UltraLite language DLLs were removed from the installation, the UltraLite
ODBC driver may have caused Sybase Central and dbisql to crash. The ODBC
driver now explicitly checks for missing resources and reports an error if
no resources are found.
================(Build #3508 - Engineering Case #471825)================
When unloading an UltraLite database to SQL, the UltraLite Unload utility
would have missed any tables for which IsNeverSynchronized() would have
returned true. This has been corrected.
================(Build #3496 - Engineering Case #469570)================
If the default command file for the Listener utility (dblsn.txt) was used
implicitly, then the -q option in the command file would have had no effect,
and the GUI was not minimized. The desired behavior was achieved if the same
command file was used explicitly (i.e. dblsn.exe @dblsn.txt).
This problem has been fixed.
================(Build #3487 - Engineering Case #467502)================
A warning message output by the UltraLite Database Initialization utility
may have been misleading. When column subsets in a table T that were referenced
in a publication PUB, were used to build an UltraLite database, the following
warning was displayed:
ignoring column subset for publication 'PUB', table 'T' -- all columns
will be added
Actually, the column subset was being used to build the UltraLite table;
columns not in the subset were being properly excluded from
the UltraLite table schema. The message was intended to warn the user in
regard to synchronization publications, as UltraLite always synchronizes
all of the rows of a table that is specified in a sync publication. Part
of the confusion is due to overloading the concept of a publication as a
set of tables plus columns to be included in the schema, with the concept
of a publication as a set of tables to be synchronized. In order to make
this clearer, the warning has been changed to:
ignoring column subset for synchronization publication 'PUB', table 'T'
-- UltraLite synchronizes entire rows
================(Build #3474 - Engineering Case #464849)================
When a column for an INSERT statement was bound in a Java application as
follows:
stmt.setTimestamp( pnum, new java.sql.Timestamp(System.currentTimeMillis())
);
executing the INSERT statement would have failed with a SQLE_CONVERSION_ERROR.
The microseconds were not scaled into nanoseconds, and vice versa. This has
been fixed.
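The missing step was scaling between the microsecond precision stored by the database and the nanosecond field carried by java.sql.Timestamp. A minimal sketch of the two conversions (plain arithmetic shown in Python for illustration; this is not the actual driver code):

```python
def micros_to_timestamp_nanos(microseconds: int) -> int:
    """Scale a microsecond fraction up to java.sql.Timestamp's nanos field."""
    return microseconds * 1000

def timestamp_nanos_to_micros(nanoseconds: int) -> int:
    """Scale the nanos field back down to microseconds (truncating)."""
    return nanoseconds // 1000

print(micros_to_timestamp_nanos(123456))     # 123456000
print(timestamp_nanos_to_micros(123456789))  # 123456
```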