# -*- mode: org; -*-
* TODO would like to be able to get an optional alist out of query-results
e.g. =(( . ) ( . ) ...)=
* TODO column information: type, name, etc.
given a table; see [[http://codesnippets.joyent.com/posts/show/337]]. can also
extract the tables in a given database, etc. this sort of information is
static, though, and possibly useless; we need query-specific information,
too.

we should be able to make this query-specific, though, with =describe!=;
cf. name in =CS_DATAFMT=:

#+BEGIN_SRC c
typedef struct _cs_datafmt {
  CS_CHAR name[CS_MAX_NAME];
  CS_INT namelen;
  CS_INT datatype;
  CS_INT format;
  CS_INT maxlength;
  CS_INT scale;
  CS_INT precision;
  CS_INT status;
  CS_INT count;
  CS_INT usertype;
  CS_LOCALE *locale;
} CS_DATAFMT;
#+END_SRC
* TODO test cases
throw in some null strings, too.
* TODO =make-{context, connection, etc.}= -> ={context, connect}=
* TODO abstract away =context*=?
no, let's provide it initially; "power users" may yet want to allocate
multiple connections per context? if desirable, we can abstract it away
later with a connection struct.

on context vs. connection:

#+BEGIN_QUOTE
The CS-Library routine cs_ctx_alloc allocates a context structure. A context
structure is used to store configuration parameters that describe a
particular “context,” or operating environment, for a set of server
connections. On most platforms, an application can have multiple contexts,
although a typical application will need just one.
#+END_QUOTE
* TODO alist-lib
* TODO =translate-type*= -> ={un,}parse-type*=
* TODO custom types
* TODO timestamp type
from [[http://msdn.microsoft.com/en-us/library/aa260631(v=SQL.80).aspx][here]]:

#+BEGIN_QUOTE
A nonnullable timestamp column is semantically equivalent to a binary(8)
column. A nullable timestamp column is semantically equivalent to a
varbinary(8) column.
#+END_QUOTE
* TODO chunking text, images (=ct_get_data=)
* TODO varbinary
# <<varbinary>>
we don't seem to be able to extract the length reliably out of the first
byte.
check this:

#+BEGIN_QUOTE
CS_LONGBINARY and CS_VARBINARY do not correspond to any SQL Server
datatypes. Specifically, CS_VARBINARY does not correspond to the SQL Server
datatype varbinary.
#+END_QUOTE

CS_VARBINARY != varbinary; and yet it returns this type! it pads nulls on
the right; but how do we distinguish between end-of-data and nulls?
=ct_data_info=? need some kind of I/O structure for that, though, specific
to chunked transfer of text and image.

from the [[http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.infocenter.dc32840.1500/html/ctref/ctref361.htm][blurb]] on numeric:

#+BEGIN_QUOTE
array is a base-256 representation of the numeric value. The byte at index
0 denotes the sign, where 0 (or a byte value of 00000000) represents a
positive number, and 1 (or a byte value of 00000001) represents a negative
number. The remaining bytes, 1-n, represent the base-256 number in
little-endian order, with the byte at index 1 being the most significant
byte. The number of bytes used in array is based on the selected precision
of the numeric. Mapping is performed based on the precision of the numeric
to the length of array that is used.
#+END_QUOTE

this looks, indeed, like what's going on with varbinary. check
[[http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.infocenter.dc32840.1500/html/ctref/ctref361.htm][this]] out:

#+BEGIN_QUOTE
CS_BINARY corresponds to the Adaptive Server types binary and varbinary.
That is, Client-Library interprets both the server binary and varbinary
types as CS_BINARY. For example, ct_describe returns CS_BINARY_TYPE when
describing a result column that has the server datatype varbinary.
CS_BINARY is defined as:

typedef unsigned char CS_BINARY;

Although CS_VARBINARY variables are used to store variable-length values,
CS_VARBINARY is considered to be a fixed-length type. This means that an
application does not typically need to provide Client-Library with the
length of a CS_VARBINARY variable.
For example, ct_bind ignores the value of datafmt->maxlength when binding
to a CS_VARBINARY variable.
#+END_QUOTE

yet another [[http://msdn.microsoft.com/en-us/library/ms188362.aspx][blurb]]:

#+BEGIN_QUOTE
Binary data types of either fixed length or variable length.

binary [ ( n ) ]

Fixed-length binary data with a length of n bytes, where n is a value from
1 through 8,000. The storage size is n bytes.

varbinary [ ( n | max) ]

Variable-length binary data. n can be a value from 1 through 8,000. max
indicates that the maximum storage size is 2^31-1 bytes. The storage size
is the actual length of the data entered + 2 bytes. The data that is
entered can be 0 bytes in length. The ANSI SQL synonym for varbinary is
binary varying.
#+END_QUOTE

notice the two extra bytes; weird.

indeed: the length of the varbinary is n / 256, where n is varbinary(n).
using varbinary(n), i'm able to encode up to 2^8 - 1 = 255, which gets
placed in the first spot. with varbinary(2), 2^8 becomes 1 0; 2^0 becomes
0 1. i don't appear to have any way of knowing, however, that the column is
two bytes long. length is 1, in the latter case; but varbinary length is
256. in the former, length is 1, varbinary length is 1. 2^8 - 1 is 0 255,
as expected. varbinary-length, however, is -256.

is there anything in CS_DATAFMT that could be of help: scale or precision,
maybe? nope, they're both 0. maybe a conversion is in store? i have the
maximum size, which is (* max-length 256); could convert it to normal
binary and truncate the left-most 0s.

wow: DATALENGTH(varbinary) does the trick; but then i have to submit an
extra query every time? is it the case that you have no idea how many 0s
are significant? i think so. or is it the case that 256 corresponds to only
4 bytes?
* TODO Prepared statements
This can be done using "ct_dynamic". We may not want to do this if prepared
statements can also be done through SQL (not sure).
Also, it introduces additional library complexity for something that's
rarely absolutely necessary: the overhead is usually in query execution,
not planning.
* DONE dynamic queries
This referred to parameterizable queries. Real "ct_dynamic" dynamic queries
may be useful to do later; this can create actual prepared statements and
return a handle to them.
* DONE [#A] determining whether data is NULL
should we zero the memory first? maybe it doesn't touch it. never mind; see
guide, p. 55:

#+BEGIN_QUOTE
When a row containing NULL values is fetched from a server, Client-Library
substitutes specified “null substitution values” for the null columns when
copying the row data to program variables.

#+BEGIN_EXAMPLE
Destination type     Null substitution value
CS_BINARY_TYPE       Empty array
CS_VARBINARY_TYPE    Empty array
CS_BIT_TYPE          0
CS_CHAR_TYPE         Empty string
CS_VARCHAR_TYPE      Empty string
CS_DATE_TYPE         4 bytes of zeros
CS_DATETIME_TYPE     8 bytes of zeros
CS_DATETIME4_TYPE    4 bytes of zeros
CS_TINYINT_TYPE      0
CS_SMALLINT_TYPE     0
CS_INT_TYPE          0
CS_DECIMAL_TYPE      0.0 (with default scale and precision)
CS_NUMERIC_TYPE      0.0 (with default scale and precision)
CS_FLOAT_TYPE        0.0
CS_REAL_TYPE         0.0
CS_MONEY_TYPE        $0.0
CS_MONEY4_TYPE       $0.0
CS_BOUNDARY_TYPE     Empty string
CS_SENSITIVITY_TYPE  Empty string
CS_TEXT_TYPE         Empty string
CS_TIME_TYPE         4 bytes of zeros
CS_IMAGE_TYPE        Empty array
#+END_EXAMPLE

To change null substitution values, an application can call the CS-Library
routine =cs_setnull=.
#+END_QUOTE

postgre has a =PQgetisnull=, by the way; and the egg returns =(sql-null)=.
this is top priority now, unfortunately, especially with dates, which throw
an error when NULL.
* DONE fix the fucking memory problems (unusable?)
CLOSED: [2011-04-11 Mon 19:23]
- CLOSING NOTE [2011-04-11 Mon 19:24] \\
  done? peter bex fixed some incorrect calling semantics.
this thing is unusable for anything useful; each =let-location= is possibly
suspect. convert them to =malloc/frees=.
until then, this whole experiment was a failure; we'll go back to using
python wrappers and throw away six (eight?) weeks of work. maybe we really
should try =bind=. or is it maybe that i'm misunderstanding the freetds
stuff itself and the use of e.g. =cancel= across =send= vs. =connect= vs.
=context=?
* DONE select database when connecting
CLOSED: [2011-02-15 Tue 17:03]
#+BEGIN_SRC scheme :tangle test-use.scm :shebang #!/usr/bin/env chicken-scheme
(use freetds debug)

(include "test-freetds-secret.scm")

(call-with-context
 (lambda (context)
   (call-with-connection
    context
    server
    username
    password
    database
    (lambda (connection)
      (call-with-result-set
       connection
       "SELECT TOP 256 PhysicianId from ttAccession"
       (lambda (command)
         (debug (result-values context connection command))))))))
#+END_SRC
* DONE =define-make-type*/type-size/update-type-table!=
CLOSED: [2010-12-16 Thu 18:28]
- CLOSING NOTE [2010-12-16 Thu 18:28] \\
  see [[implicit-table-crap]].
can probably get away with =define-make-type*/type-size=, but what about
explicit tables for =type->make-type*=, =type->type-size=,
=type->parse-type=, =type->unparse-type=?
* DONE module
CLOSED: [2011-02-15 Tue 17:05]
** DONE api
CLOSED: [2011-02-15 Tue 17:05]
connect; close; query -> result; result-{map,fold,for-each}.

when processing results, by the way, some kind of cancellation or
deallocation should take place in the case of errors, shouldn't it?

#+BEGIN_SRC org
,- connect host username password -> connection
,  - allocates context and connection, returning them in a connection
,    struct.
,  - what about database?
,- connection
,  - context*
,  - connection*
,- close connection -> void
,  - does all the context/connection deallocation.
,- query statement . parameters -> result
,  - allocates command.
,  - gets as far as =send!=; some sort of result iterator has to handle
,    binding and end-of-results, etc.
,  - result structure contains a =CS_COMMAND= pointer.
,- result (command*)
,- result-map result -> list or #!eor
,- result-for-each result -> (void) or #!eor
,- result-fold result -> value or #!eor
,- with-result-set query (lambda (result) ...) -> value
,- fetch-row result -> list or #!eof
#+END_SRC

the =result-{map, ...}= commands have to call =results!= first and test for
=CS_END_RESULTS= (returning =#!eor=); or, in the case of =CS_SUCCEED=,
should =bind!= and iteratively =row-fetch= until, say, =#!eod= (end of
data).

let's start with, say, =result-values=? it can be abstracted as a case of
=result-map=, but let's start simple.

#+BEGIN_SRC org
,- result-values command* -> ((v00 v01 ... v0m) (v10 v11 ... v1m)
,  ... (vn0 vn1 ... vnm))
,  - let bound-variables (make-bound-variables command*)
,    - if eor-object? bound-variables
,      - #!eor
,      - unfold
,        - end-of-data?
,        - cute row-fetch <> bound-variables
,        - values
,        - command*
,- make-bound-variables command* -> ((variable0 length0 translate0) ...)
,  - let result-status results! command* ...
,    - select result-status
,      - CS_SUCCEED
,        - results-info! ...
,      - CS_END_RESULTS
,        - command-drop! command*
,        - #!eor
,      - CS_FAIL
,        - (ct_cancel CS_CANCEL_ALL)
,        - if CS_FAIL, (cs_close CS_FORCE_CLOSE)
,      - else
,        - error (should we =command-drop!=?)
,- row-fetch command* bound-variables
,  - let-location rows-read ...
,    - let retcode = fetch! command* ...
,      - select retcode
,        - CS_SUCCEED CS_ROW_FAIL
,          - map lambda bound-variable ...
,        - CS_END_DATA
,          - # should we return nil here? depends if there are valid
,            sql statements that return the empty set: update, delete
,            perhaps?
,          - #!eod
,        - CS_FAIL
,          - (ct_cancel CS_CANCEL_ALL)
,            - if CS_FAIL, (cs_close CS_FORCE_CLOSE)
,          - error
,        - else
,          - error (should we =command-drop!=?)
#+END_SRC

fundamental iterator from which map, fold, filter can be built: is it
perhaps fold itself?
#+BEGIN_EXAMPLE
17:51 < klutometis> i'm looking for a fundamental iterator with which to implement map, for-each, fold, filter, unfold; would fold itself be a reasonable candidate?
17:52 < Riastradh> No, but lambda would be.
17:53 < cky> Riastradh++
17:53 < klutometis> Riastradh: heh; i think that should become a koan.
17:53 < cky> klutometis: I've once tried implementing fold in terms of unfold, and vice versa. It's actually...very challenging (to me).
17:53 < Riastradh> Maybe foof-loop too.
#+END_EXAMPLE
*** CANCELED result sets
CLOSED: [2011-02-15 Tue 17:05]
every command can be associated with multiple buffers:

#+BEGIN_QUOTE
Result data is returned to an application in the form of a “result set”. A
result set includes only a single type of result data. For example, a
regular row result set contains only regular rows, and a return parameter
result set contains only return parameters.
#+END_QUOTE

should we ignore result sets, either return the first or last during a map,
etc.; require users to worry about them by explicitly iterating through
them; or merely support one statement per query? last might be nice; i'd
hate to have to think about result sets. some people, on the other hand,
find them useful? this is, unfortunately, a fundamental design decision.

or could it be, for instance, that result-fold, etc. inaugurate the
result-set iteration themselves, relying on the user to call them n times?
yes. this will have to be dealt with, however, before any new commands can
be sent over the connection:

#+BEGIN_QUOTE
A connection has pending results if it has not processed all of the results
generated by a Client-Library command. Usually, an application cannot send
a new command on a connection with pending results.
#+END_QUOTE

as far as cancelling goes:

#+BEGIN_QUOTE
- To cancel all remaining results from a command (and eliminate the need to
  continue calling =ct_results= until it fails to return =CS_SUCCEED=),
  call ct_cancel with type as =CS_CANCEL_ALL=.
- To cancel only the current results, call ct_cancel with type as
  =CS_CANCEL_CURRENT=.
#+END_QUOTE

might need a =result-cancel=, therefore, unless we have e.g.
=with-result-set=. =with-result-set= sounds reasonable, doesn't it?

#+BEGIN_SRC scheme
#!/usr/bin/env chicken-scheme

(use test)

;;; Taking a suggestion from chandler on #scheme: "You can rely on
;;; (cons #f #f) to evaluate to an object which is not `eq?' to any
;;; other existing object."
(define eor-object (cons #f #f))
(define (eor-object? object) (eq? object eor-object))
(set-read-syntax! 'eor (lambda (port) 'eor-object))

(test-assert (eor-object? #!eor))
#+END_SRC

on =CS_END_RESULTS=, let's give an =#!eor=; otherwise, return the results
or error.
** CANCELED {un,}parsing
CLOSED: [2011-02-15 Tue 17:05]
see the [[http://wiki.call-cc.org/eggref/4/postgresql#type-conversion][postgresql stuff]] on {un,}parsers; not only can you set the
=default-type-{un,}parsers= parameters, you can pass an {un,}parse-table to
=connect=.

also, postgre has predicates associated with the types; would that allow us
to do more sophisticated things, like distinguish between bigints and
numerics? no. what about custom types? perhaps. wow, it does allow
composite parsers by predicate-composition. it also does a-priori
conversion of c-types to blobs, apparently.

the postgre connection object apparently contains the {un,}parse-tables,
allowing modification after connection. in =query*=, by the way, there's a
lot of =memcpy=, =C_malloc= and =free=; interesting:

#+BEGIN_SRC scheme
;; This must copy because libpq returns a malloced ptr...
#+END_SRC

I was going to suggest that unparsers may be superfluous, since
=ct_dynamic= takes a =CS_CHAR *= buffer; but it appears as though
=ct_param= passes the input value to the statement, which indeed takes a
=CS_DATAFMT *= and a =CS_VOID *= as the data.
On the other hand, though, if =CS_DATAFMT *= specifies =CS_CHAR_TYPE=,
maybe we can get away with string representations; we still need unparsers,
though, for srfi-19 dates, currency, etc.
* DONE text, image
CLOSED: [2010-11-22 Mon 16:25]
- CLOSING NOTE [2010-11-22 Mon 16:26] \\
  support for text and images without chunking.
might need special processing with =ct_get_data=; indeed: i assume that if
the length of the text or image is greater than maxlength, it issues a
CS_ROW_FAIL and forces chunked retrieval.
* DONE money
CLOSED: [2010-11-21 Sun 03:42]
- CLOSING NOTE [2010-11-21 Sun 03:42] \\
  we're going to go int all the way.
converting to int throws away the fraction; so does decimal,
[[http://msdn.microsoft.com/en-us/library/ms191530.aspx][apparently]]:

#+BEGIN_QUOTE
When you convert to money from integer data types, units are assumed to be
in monetary units. For example, the integer value of 4 is converted to the
money equivalent of 4 monetary units.

The following example converts smallmoney and money values to varchar and
decimal data types, respectively.

#+BEGIN_EXAMPLE
USE AdventureWorks2008R2;
GO
DECLARE @mymoney_sm smallmoney;
SET @mymoney_sm = 3148.29;
SELECT CAST(@mymoney_sm AS varchar) AS 'SM_MONEY varchar';
GO
DECLARE @mymoney money;
SET @mymoney = 3148.29;
SELECT CAST(@mymoney AS decimal) AS 'MONEY DECIMAL';
GO
#+END_EXAMPLE

Here is the result set.
#+BEGIN_EXAMPLE
SM_MONEY VARCHAR
------------------------------
3148.29

(1 row(s) affected)

MONEY DECIMAL
----------------------
3148

(1 row(s) affected)
#+END_EXAMPLE
#+END_QUOTE
* DONE bigint
CLOSED: [2010-11-19 Fri 12:32]
- CLOSING NOTE [2010-11-19 Fri 12:32] \\
  integer64
=int= is not sufficient to contain it; it's 64-bits. do we have an int64 or
a long long?
* DONE numeric, decimal
CLOSED: [2010-11-22 Mon 16:19]
how to convert from the array to a number?

#+BEGIN_SRC c
typedef struct _cs_numeric {
  unsigned char precision;
  unsigned char scale;
  unsigned char array[CS_MAX_NUMLEN];
} CS_NUMERIC;
#+END_SRC

=cs_calc= or =cs_convert=, maybe? see the quote under [[varbinary]]; it's a
base-256 little-endian with sign in the first position. =cs_convert= was
fine with money, for instance; we can do it to float. but do we need to
respect precision, somehow?

on [[http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.help.ase_15.0.sqlug/html/sqlug/sqlug245.htm][precision vs. scale]]:

#+BEGIN_QUOTE
- The precision specifies the maximum number of decimal digits that can be
  stored in the column. It includes all digits to the right and left of the
  decimal point. You can specify a precision ranging from 1 to 38 digits or
  use the default precision of 18 digits.
- The scale specifies the maximum number of digits that can be stored to
  the right of the decimal point. The scale must be less than or equal to
  the precision. You can specify a scale ranging from 0 to 38 digits or use
  the default scale of 0 digits.
#+END_QUOTE
** DONE bigint vs. numeric
CLOSED: [2010-11-22 Mon 16:19]
we can distinguish between bigint and numeric in that bigint has a scale of
0; in fact, maybe we can treat all scale-zeros as integers. we may not
explicitly need to, since (/ (expt 10 0)) also results in an integer.
* DONE datatype utilities
CLOSED: [2010-11-21 Sun 04:12]
manual, p. 2-194:

#+BEGIN_QUOTE
- cs_calc, which performs arithmetic operations on decimal, money, and
  numeric datatypes
- cs_cmp, which compares datetime, decimal, money, and numeric datatypes
- cs_convert, which converts a data value from one datatype to another
- cs_dt_crack, which converts a machine readable datetime value into a
  user-accessible format
- cs_dt_info, which sets or retrieves language-specific datetime
  information
- cs_strcmp, which compares two strings
#+END_QUOTE
* DONE low-level api
CLOSED: [2010-11-18 Thu 18:18]
- CLOSING NOTE [2010-11-18 Thu 18:18] \\
  elements are there in the test.
connect, describe, bind, fetch, close, etc. only have to describe once per
query, amirite?
* DONE unicode
CLOSED: [2010-11-18 Thu 18:10]
- CLOSING NOTE [2010-11-18 Thu 18:10] \\
  fixed by prefixing the string with N! see
  http://lists.ibiblio.org/pipermail/freetds/2010q2/026029.html
not fixed, apparently, even with utf8; is the problem putting it in the
database (test.sql), pulling it or both? don't forget the whole issue with
locale vs. server locale, etc. shit.
* DONE malloc etiquette
CLOSED: [2010-11-18 Thu 18:13]
- CLOSING NOTE [2010-11-18 Thu 18:13] \\
  got rid of the cast; still don't have pointers, though.
http://drj11.wordpress.com/2007/04/08/sizeofchar-is-1/
* DONE null-padding
CLOSED: [2010-11-18 Thu 18:13]
- CLOSING NOTE [2010-11-18 Thu 18:13] \\
  is this desirable, btw? seems to result in retrievable binary data
* DONE binary
CLOSED: [2010-11-18 Thu 18:13]
- CLOSING NOTE [2010-11-18 Thu 18:14] \\
  binary works; still haven't cracked varbinary. does it matter? the guide
  claims it's unused.
bizarre: the first two bytes of the binary data appear to signify the
length of the data. is that why =ex_display_dlen= gives it
=(2 * column->maxlength) + 2=? double, because =CS_INT= is double the size
of =CS_CHAR=, and they're putting it into a =CS_CHAR= buffer. (we're not,
are we?) still, the 2 byte header is interesting; i can't find any
documentation about it.
(no: =CS_BINARY= is =CS_CHAR=!)

(don't read too much into =ex_display_dlen=, by the way, since it has to do
with representation, not storage; the + 2 on binary could well be the
addition of "0x", for all i know.)

#+BEGIN_QUOTE
CS_VARBINARY does not correspond to any Adaptive Server type, and Open
Client routines do not return CS_VARBINARY_TYPE. CS_VARBINARY is a
structure that holds a byte array and its length:

#+BEGIN_SRC c
typedef struct _cs_varybin {
  CS_SMALLINT len;
  CS_BYTE array[CS_MAX_CHAR];
} CS_VARBINARY;
#+END_SRC

CS_VARBINARY is provided so that non-C programming language veneers can be
written for Open Client. Typical client applications do not use
CS_VARBINARY.
#+END_QUOTE

oh, duh: it's the =CS_VARBINARY= structure in vector-form. =CS_VARCHAR= is
similar, apparently. no: this is only because i did an illegitimate cast to
VARBINARY on a BINARY column! normally, this header doesn't exist.
* DONE bigint, nchar, nvarchar, ntext, boolean, sensitivity, etc.
CLOSED: [2010-11-18 Thu 18:15]
- CLOSING NOTE [2010-11-18 Thu 18:15] \\
  seem to have taken care of these.
it appears as though my types-table is incomplete (for MSSQL, at least);
see: [[http://msdn.microsoft.com/en-us/library/aa258271(SQL.80).aspx]].
additionally, things like boolean aren't actually supported by MSSQL; also,
it claims bit is ``Integer data with either a 1 or 0 value.'' should we
really do the boolean extrapolation?
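the base-256 blurb quoted under the varbinary entry can be sanity-checked
with a small decoder. this is a sketch, not the real =cs_convert=: the
struct mirrors the =CS_NUMERIC= definition quoted earlier (with the array
size and the helper's name invented here), it takes index 1 as the most
significant byte, as the blurb says, and shifts the decimal point left by
=scale= digits:

#+BEGIN_SRC c
#include <assert.h>
#include <stddef.h>

/* Sketch of the CS_NUMERIC layout quoted above; the array size here is
 * arbitrary (the real one is CS_MAX_NUMLEN). */
#define MAX_NUMLEN 33
typedef struct {
  unsigned char precision;
  unsigned char scale;
  unsigned char array[MAX_NUMLEN];
} numeric;

/* Per the blurb: array[0] is the sign (0 positive, 1 negative);
 * array[1..len-1] is the magnitude in base 256, array[1] most significant.
 * len is how many bytes of the array are in use. */
double numeric_to_double(const numeric *n, size_t len) {
  double magnitude = 0.0;
  for (size_t i = 1; i < len; i++)
    magnitude = magnitude * 256.0 + n->array[i];
  for (unsigned char s = 0; s < n->scale; s++)
    magnitude /= 10.0; /* shift the decimal point left by `scale` digits */
  return n->array[0] ? -magnitude : magnitude;
}

int main(void) {
  /* 314829 = 0x04 0xCD 0xCD; with scale 2 this should read back 3148.29. */
  numeric n = { 7, 2, { 0, 0x04, 0xCD, 0xCD } };
  double x = numeric_to_double(&n, 4);
  assert(x > 3148.28 && x < 3148.30);
  return 0;
}
#+END_SRC

if this round-trips against what the server sends for a known
numeric(7,2), the blurb's byte order is confirmed; if not, the "little-
endian" wording wins over the "index 1 most significant" wording.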
interesting for adding binary data:
[[http://msdn.microsoft.com/en-us/library/ms187403.aspx]]

#+BEGIN_SRC c
#define CS_ILLEGAL_TYPE TDS_STATIC_CAST(CS_INT, -1)
#define CS_CHAR_TYPE TDS_STATIC_CAST(CS_INT, 0)
#define CS_BINARY_TYPE TDS_STATIC_CAST(CS_INT, 1)
#define CS_LONGCHAR_TYPE TDS_STATIC_CAST(CS_INT, 2)
#define CS_LONGBINARY_TYPE TDS_STATIC_CAST(CS_INT, 3)
#define CS_TEXT_TYPE TDS_STATIC_CAST(CS_INT, 4)
#define CS_IMAGE_TYPE TDS_STATIC_CAST(CS_INT, 5)
#define CS_TINYINT_TYPE TDS_STATIC_CAST(CS_INT, 6)
#define CS_SMALLINT_TYPE TDS_STATIC_CAST(CS_INT, 7)
#define CS_INT_TYPE TDS_STATIC_CAST(CS_INT, 8)
#define CS_REAL_TYPE TDS_STATIC_CAST(CS_INT, 9)
#define CS_FLOAT_TYPE TDS_STATIC_CAST(CS_INT, 10)
#define CS_BIT_TYPE TDS_STATIC_CAST(CS_INT, 11)
#define CS_DATETIME_TYPE TDS_STATIC_CAST(CS_INT, 12)
#define CS_DATETIME4_TYPE TDS_STATIC_CAST(CS_INT, 13)
#define CS_MONEY_TYPE TDS_STATIC_CAST(CS_INT, 14)
#define CS_MONEY4_TYPE TDS_STATIC_CAST(CS_INT, 15)
#define CS_NUMERIC_TYPE TDS_STATIC_CAST(CS_INT, 16)
#define CS_DECIMAL_TYPE TDS_STATIC_CAST(CS_INT, 17)
#define CS_VARCHAR_TYPE TDS_STATIC_CAST(CS_INT, 18)
#define CS_VARBINARY_TYPE TDS_STATIC_CAST(CS_INT, 19)
#define CS_LONG_TYPE TDS_STATIC_CAST(CS_INT, 20)
#define CS_SENSITIVITY_TYPE TDS_STATIC_CAST(CS_INT, 21)
#define CS_BOUNDARY_TYPE TDS_STATIC_CAST(CS_INT, 22)
#define CS_VOID_TYPE TDS_STATIC_CAST(CS_INT, 23)
#define CS_USHORT_TYPE TDS_STATIC_CAST(CS_INT, 24)
#define CS_UNICHAR_TYPE TDS_STATIC_CAST(CS_INT, 25)
#define CS_BLOB_TYPE TDS_STATIC_CAST(CS_INT, 26)
#define CS_DATE_TYPE TDS_STATIC_CAST(CS_INT, 27)
#define CS_TIME_TYPE TDS_STATIC_CAST(CS_INT, 28)
#define CS_UNITEXT_TYPE TDS_STATIC_CAST(CS_INT, 29)
#define CS_BIGINT_TYPE TDS_STATIC_CAST(CS_INT, 30)
#define CS_USMALLINT_TYPE TDS_STATIC_CAST(CS_INT, 31)
#define CS_UINT_TYPE TDS_STATIC_CAST(CS_INT, 32)
#define CS_UBIGINT_TYPE TDS_STATIC_CAST(CS_INT, 33)
#define CS_XML_TYPE TDS_STATIC_CAST(CS_INT, 34)
#define CS_UNIQUE_TYPE TDS_STATIC_CAST(CS_INT, 40)
#define CS_USER_TYPE TDS_STATIC_CAST(CS_INT, 100)
#+END_SRC
* DONE segfault
CLOSED: [2010-11-18 Thu 18:15]
- CLOSING NOTE [2010-11-18 Thu 18:15] \\
  hasn't occurred lately (fingers crossed).
should we not let garbage collection do the freeing? should we free
explicitly when done? is it inappropriate use of let-location?
* DONE we need to do something about char vs. varchar vs. binary
CLOSED: [2010-11-18 Thu 18:16]
- CLOSING NOTE [2010-11-18 Thu 18:16] \\
  null-padding
the first should probably be null terminated; the second padded(?); and the
third might contain #\nul. we need a mechanism for determining the length
of the retrieved data. see =ex_display_dlen=? might need some control of
the =datafmt*= at the translator level; i.e. pass it to the translators,
too?
* DONE TYPE-size
CLOSED: [2010-11-16 Tue 17:45]
* DONE CS_CHAR*->string
CLOSED: [2010-11-16 Tue 13:06]
* DONE allocating C structures
CLOSED: [2010-11-18 Thu 18:16]
- CLOSING NOTE [2010-11-18 Thu 18:16] \\
  the make-type* macro-hack
http://wiki.call-cc.org/allocating-c-structures-under-control-of-the-chicken-gc
http://wiki.call-cc.org/man/4/Data%20representation?action=show
- immediate vs. non-immediate
* DONE handle results
CLOSED: [2010-11-18 Thu 18:17]
From "Program Structure for Processing Results:"

#+BEGIN_EXAMPLE
while ct_results returns CS_SUCCEED
    case CS_ROW_RESULT
        ct_res_info to get the number of columns
        for each column:
            ct_describe to get a description of the column
            ct_bind to bind the column to a program variable
        end for
        while ct_fetch returns CS_SUCCEED or CS_ROW_FAIL
            if CS_SUCCEED
                process the row
            else if CS_ROW_FAIL
                handle the row failure
            end if
        end while
        switch on ct_fetch’s final return code
            case CS_END_DATA...
            case CS_CANCELED...
            case CS_FAIL...
        end switch
    end case
    case CS_CURSOR_RESULT
        ct_res_info to get the number of columns
        for each column:
            ct_describe to get a description of the column
            ct_bind to bind the column to a program variable
        end for
        while ct_fetch returns CS_SUCCEED or CS_ROW_FAIL
            if CS_SUCCEED
                process the row
            else if CS_ROW_FAIL
                handle the row failure
            end if
            /* For update or delete only: */
            if target row is not the row just fetched
                ct_keydata to specify the target row key
            end if
            /* End for update or delete only */
            /* To send another cursor command: */
            ct_cursor to initiate the cursor command
            ct_param if command is update of some columns only
            ct_send to send the command
            while ct_results returns CS_SUCCEED
                (...process results...)
            end while
            /* End to send another cursor command */
        end while
        switch on ct_fetch’s final return code
            case CS_END_DATA...
            case CS_CANCELED...
            case CS_FAIL...
        end switch
    end case
    case CS_PARAM_RESULT
        ct_res_info to get the number of parameters
        for each parameter:
            ct_describe to get a description of the parameter
            ct_bind to bind the parameter to a variable
        end for
        while ct_fetch returns CS_SUCCEED or CS_ROW_FAIL
            if CS_SUCCEED
                process the row of parameters
            else if CS_ROW_FAIL
                handle the failure
            end if
        end while
        switch on ct_fetch’s final return code
            case CS_END_DATA...
            case CS_CANCELED...
            case CS_FAIL...
        end switch
    end case
    case CS_STATUS_RESULT
        ct_bind to bind the status to a program variable
        while ct_fetch returns CS_SUCCEED or CS_ROW_FAIL
            if CS_SUCCEED
                process the return status
            else if CS_ROW_FAIL
                handle the failure
            end if
        end while
        switch on ct_fetch’s final return code
            case CS_END_DATA...
            case CS_CANCELED...
            case CS_FAIL...
        end switch
    end case
    case CS_COMPUTE_RESULT
        (optional: ct_compute_info to get bylist length, bylist, or
        compute row id)
        ct_res_info to get the number of columns
        for each column:
            ct_describe to get a description of the column
            ct_bind to bind the column to a program variable
            (optional: ct_compute_info to get the compute column id or the
            aggregate operator for the compute column)
        end for
        while ct_fetch returns CS_SUCCEED or CS_ROW_FAIL
            if CS_SUCCEED
                process the compute row
            else if CS_ROW_FAIL
                handle the failure
            end if
        end while
        switch on ct_fetch’s final return code
            case CS_END_DATA...
            case CS_CANCELED...
            case CS_FAIL...
        end switch
    end case
    case CS_MSG_RESULT
        ct_res_info to get the message id
        code to handle the message
    end case
    case CS_DESCRIBE_RESULT
        ct_res_info to get the number of columns
        for each column:
            ct_describe or ct_dyndesc to get a description
        end for
    end case
    case CS_ROWFMT_RESULT
        ct_res_info to get the number of columns
        for each column:
            ct_describe to get a column description
            send the information on to the gateway client
        end for
    end case
    case CS_COMPUTEFMT_RESULT
        ct_res_info to get the number of columns
        for each column:
            ct_describe to get a column description
            (if required: ct_compute_info for compute information
            end if required)
            send the information on to the gateway client
        end for
    end case
    case CS_CMD_DONE
        indicates a command’s results are completely processed
    end case
    case CS_CMD_SUCCEED
        indicates the success of a command that returns no results
    end case
    case CS_CMD_FAIL
        indicates a command failed
    end case
end while
switch on ct_results’ final return code
    case CS_END_RESULTS
        indicates no more results
    end case
    case CS_CANCELED
        indicates results were canceled
    end case
    case CS_FAIL
        indicates ct_results failed
    end case
end switch
#+END_EXAMPLE
* DONE sqlite3
CLOSED: [2010-11-18 Thu 18:17]
sqlite3 employs cond with =>; (define ((a b . c) d) ...); ad-hoc
foreign-lambdas; let-location; and #$.
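stripped of the per-result-type cases, the "Program Structure for
Processing Results" skeleton above is a nested loop: =ct_results= per
result set, =ct_fetch= per row. a standalone sketch of that control flow,
with stubbed stand-ins (hypothetical, not the real ct-lib calls or
signatures) so it runs without a server:

#+BEGIN_SRC c
#include <assert.h>

/* Return codes mirroring the CS_* names in the pseudocode above. */
enum { CS_SUCCEED, CS_END_RESULTS, CS_END_DATA, CS_ROW_FAIL, CS_FAIL };

/* Stubs standing in for ct_results/ct_fetch: two result sets of three
 * rows each, then CS_END_RESULTS / CS_END_DATA. */
static int result_sets = 2, rows_left = 0;

static int stub_ct_results(void) {
  if (result_sets-- > 0) { rows_left = 3; return CS_SUCCEED; }
  return CS_END_RESULTS;
}

static int stub_ct_fetch(void) {
  return rows_left-- > 0 ? CS_SUCCEED : CS_END_DATA;
}

/* The "while ct_results returns CS_SUCCEED ... while ct_fetch ..."
 * skeleton; describe/bind and per-row processing would go in the body. */
int process_all_rows(void) {
  int rows = 0;
  while (stub_ct_results() == CS_SUCCEED)
    while (stub_ct_fetch() == CS_SUCCEED)
      rows++;
  return rows;
}

int main(void) {
  assert(process_all_rows() == 6); /* 2 result sets x 3 rows */
  return 0;
}
#+END_SRC

this is the loop that result-map/result-fold would wrap once per result
set; the real code additionally has to dispatch on the result type and
handle CS_ROW_FAIL/CS_FAIL with ct_cancel, as the sketch under the api
section does.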
* DONE freetds configuration
CLOSED: [2010-11-03 Wed 11:33]
- CLOSING NOTE [2010-11-03 Wed 11:33] \\
  come to think of it, we have to do the same thing with the python
  wrapper; let's include a secret that doesn't get checked in.
we're setting the name and password hard-codedly; how do we defer to
freetds.conf?
* DONE naming
CLOSED: [2010-11-18 Thu 18:17]
there is an inconsistency between constants (which we translate verbosely)
and procedures (which we translate literally). by reusing the constants
(majuscules and all), we can avoid taking responsibility for the sparseness
and avoid translating things.
* DONE CS_VERSION_X
CLOSED: [2010-11-18 Thu 18:17]
- CLOSING NOTE [2010-11-18 Thu 18:17] \\
  use a default.
should this really be hard-coded; or does freetds handle this according to
freetds.conf? there is also the CS_TDS_VERSION connection-property, but I'm
not sure if that's the same thing.
* DONE cs vs. ct
CLOSED: [2010-11-02 Tue 16:25]
client-server library vs. client library, apparently:

#+BEGIN_QUOTE
CS-Library routines start with the prefix “cs.” Client-Library routines
start with the prefix “ct”. All Client-Library programs include at least
two calls to CS-Library, because they must allocate and drop a context
structure. (ctlib guide, p. 18)
#+END_QUOTE
* CANCELED reimplement in [[http://chicken.wiki.br/easyffi][easyffi]]?
CLOSED: [2011-04-11 Mon 19:23]
might get rid of the segfaults; or do we still have the problem of memory
management? (I suspect we do; but maybe easyffi handles conversion.)

[[http://wiki.call-cc.org/eggref/4/bind][bind]] is the new version, apparently; also check out the
[[http://wiki.call-cc.org/eggref/4/bind#general-operation][bind-]] stuff. Does some malloc, etc. Not to mention
[[http://wiki.call-cc.org/eggref/4/bind#bind-opaque-type][bind-opaque-type]], etc.
* CANCELED =let-location=
CLOSED: [2011-04-11 Mon 19:24]
it seems like we can use =let-location= when an allocation function is
provided by the library; otherwise, use the =make-type*= hack.
* CANCELED get rid of the implicit table crap in =define-make-type*/type-size=
# <<implicit-table-crap>>
CLOSED: [2010-12-16 Thu 18:27]
- CLOSING NOTE [2010-12-16 Thu 18:28] \\
  we did worse, and replaced it with three implicit tables:
  =datatype->make-type*=, =datatype->type-size=,
  =datatype->translate-type*=.
think along the lines of make-type-table, etc.
* CANCELED per-type dataformat-configurers
CLOSED: [2010-11-18 Thu 18:12]
* CANCELED =ct_data_info=
CLOSED: [2010-11-18 Thu 18:14]
- CLOSING NOTE [2010-11-18 Thu 18:15] \\
  vide [[ct_get_data][infra]].
might give us some data on the actual length of values.
* CANCELED =ct_get_data=
# <<ct_get_data>>
CLOSED: [2010-11-18 Thu 18:14]
- CLOSING NOTE [2010-11-18 Thu 18:14] \\
  nah; might need this for full support of text and image, though.
might it solve the =CS_ROW_FAIL= on binary values?

#+BEGIN_QUOTE
That is, an application can call =ct_get_data= with a =buflen= of 0, and
then call =ct_data_info= to retrieve the descriptor. This technique is
useful when an application needs to determine the length of a text or image
value before retrieving it.
#+END_QUOTE
* CANCELED varchar, varbinary
CLOSED: [2010-11-18 Thu 18:15]
See CS_VARBINARY and CS_VARCHAR; they have their own datatypes.
* CANCELED UNICHAR, etc.
CLOSED: [2010-11-18 Thu 18:15]
- CLOSING NOTE [2010-11-18 Thu 18:16] \\
  apparently they do not need special treatment?
* CANCELED error-handling
CLOSED: [2010-11-18 Thu 18:16]
should we have =allocate-context!=, etc. do implicit error-handling, only
to have e.g. =results!= omit it because we actually need the return value?
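on the CS_VARBINARY mentioned above (and quoted in full under the binary
entry): it is just a length-prefixed byte array, so extracting the
significant bytes ignores any null padding past =len=. a sketch, with
simplified type names and an invented helper:

#+BEGIN_SRC c
#include <assert.h>
#include <string.h>

/* Mirror of the CS_VARBINARY layout (CS_SMALLINT len; CS_BYTE array[...]),
 * with the types and size simplified for illustration. */
#define MAX_CHAR 256
typedef struct {
  short len;
  unsigned char array[MAX_CHAR];
} varbinary;

/* Copy out only the significant bytes; anything past len is padding. */
size_t varbinary_extract(const varbinary *vb, unsigned char *out) {
  memcpy(out, vb->array, (size_t)vb->len);
  return (size_t)vb->len;
}

int main(void) {
  varbinary vb = { 3, { 0xde, 0xad, 0xbe, 0, 0 } }; /* null-padded */
  unsigned char out[MAX_CHAR];
  assert(varbinary_extract(&vb, out) == 3);
  assert(out[0] == 0xde && out[1] == 0xad && out[2] == 0xbe);
  return 0;
}
#+END_SRC

this is exactly the length information that plain CS_BINARY binding loses,
which is why the varbinary entry above resorts to DATALENGTH().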
* CANCELED connection structure
CLOSED: [2010-11-18 Thu 18:16]
may need to allocate it with foreign-safe-lambda; although sqlite3
dispenses with this:
#+BEGIN_SRC scheme
(define (open-database path)
  (check-string 'open-database path)
  (let-location ([db sqlite3:database])
    (cond [((foreign-lambda sqlite3:status "sqlite3_open"
                            nonnull-c-string
                            (c-pointer sqlite3:database))
            (##sys#expand-home-path path)
            #$db)
           => (abort-sqlite3-error 'open-database #f path)]
          [else db])))
#+END_SRC
* CANCELED cursors?
CLOSED: [2010-11-18 Thu 18:17]
- CLOSING NOTE [2010-11-18 Thu 18:17] \\
  not right now
#+BEGIN_QUOTE
Because relational databases are oriented toward sets, no concept of next
row exists, meaning that you cannot operate on an individual row in a set.
Cursor functionality solves this problem by letting a result set be
processed one row at a time, similar to the way you read and update a file
on a disk. A DB-Library cursor indicates the current position in a result
set, just as the cursor on your screen indicates the current position in a
block of text.
#+END_QUOTE
vs. browse mode:
#+BEGIN_QUOTE
Cursors let the user scroll through and update a result set with fewer
restrictions than browse mode.
#+END_QUOTE
not sure if freetds supports them, though; dblib:
#+BEGIN_QUOTE
dblib  cursor  dbcursor          (same)  never
dblib  cursor  dbcursorbind      (same)  never
dblib  cursor  dbcursorclose     (same)  never
dblib  cursor  dbcursorcolinfo   (same)  never
dblib  cursor  dbcursorfetch     (same)  never
dblib  cursor  dbcursorfetchex   n/a     never
dblib  cursor  dbcursorinfo      (same)  never
dblib  cursor  dbcursorinfoex    n/a     never
dblib  cursor  dbcursoropen      (same)  never
#+END_QUOTE
ctlib:
#+BEGIN_QUOTE
ctlib  (all)  ct_cursor  OK  Initiate a Client-Library cursor command.
#+END_QUOTE
* online sybase manuals
[[http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.help.sdk_12.5.1.ctref/html/ctref/title.htm][reference manual]],
[[http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.help.ocs_12.5.1.clcprgde/html/clcprgde/title.htm][programmer's guide]].
* ODBC instead?
might be of more general interest; it is the more general solution. but we
still need freetds as a transport mechanism for communicating with mssql.
* postgresql
has things like:
#+BEGIN_SRC scheme
(when (PQisBusy conn-ptr)
  (thread-wait-for-i/o! conn-fd #:input)
  (loop))
#+END_SRC
which can happen, apparently, even with synchronous calls (at least in
dblib).
* dblib
Client-Library is a library designed to accommodate cursors and other
advanced features. [as opposed to DB-Library, etc.]

dbfcmd – adds text to the command buffer using sprintf-type formatting.
This routine is the same as dbcmd, except that it allows arguments to be
substituted into the text. [but does it do escaping?]

There are two types of result rows: regular rows and compute rows. Regular
rows are generated from columns in a select command’s select list; compute
rows are generated from columns in a select command’s compute clause.
Since these two types of rows contain very different data, the application
must process them separately.

If a command batch contains only a single Transact-SQL command and that
command does not return rows (for example, a use database command or an
insert command), an application does not have to call dbresults to process
the results of the command. However, calling dbresults in these situations
causes no harm. It may result in easier code maintenance if, after every
command, you consistently call dbresults until it returns
NO_MORE_RESULTS. If the command batch contains more than one Transact-SQL
command, an application must call dbresults once for every command in the
batch, whether or not the command returns rows.
For this reason, it is recommended that a DB-Library/C application always
call dbresults in a loop after sending a command or commands to a server.
** binding
[here's where the binding comes in (with escaping, i hope):]

The simplest way to get result data is to bind result columns to program
variables, using dbbind and dbaltbind. Then, when the application calls
dbnextrow to read a result row (see “Reading result rows” on page 19),
DB-Library/C will automatically place copies of the columns’ data into the
program variables to which they are bound. The application must call
dbbind and dbaltbind after a dbresults call but before the first call to
dbnextrow.
** buffer
[to reiterate the command buffer (why: transactions?):]

Note that DB-Library/C processes results one command at a time. When the
application has read all the results for one command, it must call
dbresults again to set up the results for the next command in the command
buffer. To ensure that all results are handled, Sybase strongly recommends
that dbresults be called in a loop.
** cancelling
[cancelling (as opposed to gathering all the results à la freetds dblib
docs):]

The following routines cancel results:
- dbcancel :: cancels results from the current command batch. This routine
  cancels all the commands in the current batch.
- dbcanquery :: cancels any rows pending from the most recently executed
  query.
** timeouts
By default, DB-Library will wait indefinitely for the results of a server
command to arrive. Applications can use the routines below to specify a
finite timeout period:
- dbsettime :: sets the number of seconds that DB-Library/C will wait for
  a server response
- DBGETTIME :: gets the number of seconds that DB-Library/C will wait for
  a server response
** buffers
to reiterate the processing of command buffers (why: transaction?):

Note that DB-Library/C processes results one command at a time.
When the application has read all the results for one command, it must
call dbresults again to set up the results for the next command in the
command buffer. To ensure that all results are handled, Sybase strongly
recommends that dbresults be called in a loop.
** messages vs. errors
- Server messages and errors, which range in severity from informational
  messages to fatal errors. Server messages and errors are known to
  DB-Library/C applications as “messages.” To list all possible Adaptive
  Server messages, use the Transact-SQL command:
  #+BEGIN_QUOTE
  select * from sysmessages
  #+END_QUOTE
- DB-Library/C warnings and errors, known to DB-Library/C applications as
  “errors.”
** mainline vs. centralized error-checking
- Test DB-Library/C routine return codes in the mainline code, handling
  failures on a case-by-case basis
- Centralize message and error handling by installing a message handler
  and an error handler, which are then automatically called by
  DB-Library/C when a message or error occurs
Note, however, that even an application that uses centralized error and
message handling will need some mainline error logic, depending on the
nature of the application. To provide a DB-Library/C application with
centralized message and error handling, the application programmer must
write a message handler and an error handler and install them via
=dbmsghandle= and =dberrhandle=.
** browse mode
(this is cool)

Browse mode provides a means for browsing through database rows and
updating their values a row at a time.
** text and image types
text and image are Adaptive Server datatypes designed to hold large text
or image values. The text datatype will hold up to 2,147,483,647 bytes of
printable characters. The image datatype will hold up to 2,147,483,647
bytes of binary data. Because they can be so large, text and image values
are not actually stored in database tables. Instead, a pointer to the text
or image value is stored in the table.
This pointer is called a “text pointer.”
** conversion
DB-Library/C supports conversions between most server datatypes with the
dbconvert and dbconvert_ps routines. For information on server datatypes,
see “Types” on page 408. The dbbind, dbbind_ps, dbaltbind, and
dbaltbind_ps routines, which bind result columns to program variables, can
also be used to perform type conversion. Those routines each contain a
parameter that specifies the datatype of the receiving program variable.
If the data being returned from the server is of a different datatype,
DB-Library/C will usually convert it automatically to the type specified
by the parameter.
** process control flow
** remote procedure call
However, a remote procedure call is often more efficient than an execute
command. [hmm; thus.]
** datetime and money
** cleanup
** cursors
- dbcursor :: Inserts, updates, deletes, locks, or refreshes a particular
  row in the fetch buffer
ah, thus.
* ctlib
** manual vs. guide
Although there is some introductory material about application development
in this manual, it is highly recommended that applications programmers
read the Client-Library Programmer’s Guide before designing a
Client-Library application.
** initialization
ct_init initializes Client-Library. An application calls ct_init after
calling cs_ctx_alloc and before calling any other Client-Library routine.
** results
Almost all Client-Library programs will process results by using a loop
controlled by ct_results. Inside the loop, a switch takes place on the
current type of result. Different types of results require different types
of processing. For row results, typically the number of columns in the
result set is determined and then used to control a loop in which result
items are bound to program variables. An application can call ct_res_info
to get the number of result columns, but in the example this is not
necessary, because exactly two columns were selected.
After the result items are bound, ct_fetch is called to fetch data rows
until end-of-data. ct_bind binds a result item to a program variable.
Binding creates an association between a result item and a program data
space. [so binding is the opposite of what i assumed: not for constructing
commands, but for binding results to variables; or is the former also
available?] ct_fetch fetches result data. In the example, since binding
has been specified and the count field in the CS_DATAFMT structure for
each column is set to 1, each ct_fetch call copies one row of data into
program data space. As each row is fetched, the example program prints it.
** synchronicity
Non-asynchronous routines can also return CS_BUSY if called when an
asynchronous operation is pending for a connection. [ouch.]
** browse mode
Browse mode is included in 10.0 Client-Library in order to provide
compatibility with Open Server applications and older Open Client
libraries. Its use in new Open Client Client-Library applications is
discouraged, because cursors provide the same functionality in a more
portable and flexible manner. Further, browse mode is SYBASE-specific and
is not suited for use in a heterogeneous environment.
** callbacks
Two types of callback events, however, can occur when Client-Library is
not reading from the network. These are:
- The completion callback event, which occurs when an asynchronous
  Client-Library routine completes.
- The notification callback event, which occurs when an Open Server
  notification arrives for an application.
*** client message callbacks
A client message callback must return either:
- CS_SUCCEED, to instruct Client-Library to continue any processing that
  is currently occurring on this connection.
- CS_FAIL, to instruct Client-Library to terminate any processing that is
  currently occurring on this connection.
*** completion callbacks
A completion callback is called whenever an application receives notice
that an asynchronous routine has completed.
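a client message callback of the kind described above is conventionally a
C function that logs the message and returns CS_SUCCEED. since we can't
assume ctpublic.h here, this sketch stubs the relevant types and constants
(CS_RETCODE, CS_CLIENTMSG, the byte layout of the packed message number:
all stand-ins with made-up layouts; only the shape of the handler is the
point):
#+BEGIN_SRC c
#include <stdio.h>

/* Stand-ins for ctpublic.h declarations; the real layouts differ. */
typedef int CS_RETCODE;
typedef int CS_INT;
#define CS_SUCCEED 1

typedef struct { void *unused; } CS_CONTEXT;
typedef struct { void *unused; } CS_CONNECTION;
typedef struct {
    CS_INT msgnumber;      /* packed layer/origin/severity/number */
    char   msgstring[256];
} CS_CLIENTMSG;

/* Decompose a packed message number, in the spirit of the CS_LAYER and
   CS_NUMBER macros; the byte positions here are assumed, not quoted. */
#define MSG_LAYER(m)  (((m) >> 24) & 0xff)
#define MSG_NUMBER(m) ((m) & 0xff)

CS_RETCODE clientmsg_cb(CS_CONTEXT *ctx, CS_CONNECTION *con,
                        CS_CLIENTMSG *msg)
{
    (void)ctx; (void)con;  /* unused in this sketch */
    fprintf(stderr, "client message: layer %d, number %d: %s\n",
            (int)MSG_LAYER(msg->msgnumber),
            (int)MSG_NUMBER(msg->msgnumber),
            msg->msgstring);
    return CS_SUCCEED;  /* keep processing on this connection */
}
#+END_SRC
in the real binding this would be installed with ct_callback and the types
would come from ctpublic.h.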
*** notification callbacks
A registered procedure is a type of procedure that is defined and
installed in a running Open Server. A Client-Library application can use a
remote procedure call command to execute a registered procedure, and can
“watch” for a registered procedure to execute.
*** server message callbacks
An application can handle server error and informational messages in-line,
or through a server message callback routine.
** commands
1. Initiate the command by calling ct_command, ct_cursor, or ct_dynamic.
2. Pass parameters for the command (if required). Most applications pass
   parameters by calling ct_param once for each parameter that the command
   requires, but it is also possible to pass parameters for a command by
   using ct_dyndesc.
3. Send the command to the server by calling ct_send.
4. Verify the success of the command by calling ct_results.
[sweet: parameters is what i'm talking about (i think); can you "bind"
them? "pass"?]
*** ct_dynamic
"prepared statements," of course; so-called "pre-compiler use"?
- The ability to send a command to execute a prepared statement and
  reference the statement with a unique identifier. A prepared statement
  is a statement that has been compiled and stored with an identifier as a
  result of a ct_dynamic(CS_PREPARE) call and a ct_send call. An
  application typically prepares a statement if it plans to execute the
  statement multiple times. Variables are particularly useful in dynamic
  commands because they allow an application to compile a statement once
  and change the values of the statement’s variables each time it executes
  the statement.
- The ability to describe (with CS_DESCRIBE_OUTPUT) prepared statement
  output before sending a command to execute the statement.
- Less overhead and faster performance than ct_command, if the statement
  is executed more than once. This benefit is specific to the execution of
  SQL statements on a SQL Server.
All of the above advantages can also be realized using a stored procedure
and either language or RPC commands. [fuck that!]
** cursors
** dynamic sql
[this is what they call "prepared statements", it seems.]

Dynamic SQL is primarily useful for precompiler support, but it can also
be used by interactive applications that do either of the following:
- Generate SQL statements based on information provided by an end-user
- Allow end-users to create whole or partial SQL statements
By ANSI definition, a cursor is associated with a single result set, and
thus, a single SQL statement. This means that a dynamic SQL prepared
statement can only be either:
- A SQL select statement, or
- A Transact-SQL execute statement
The stored procedure being executed can contain only a single SQL select
statement. [does this preclude update, delete, etc.? what the fuck is a
transact-sql statement?]

SQL Server implements dynamic SQL using temporary stored procedures. A
temporary stored procedure is created when a SQL statement is prepared,
and destroyed when that prepared statement is de-allocated. De-allocation
can occur either explicitly with a ct_dynamic(CS_DEALLOC) call or
implicitly when a connection is closed. See the Transact-SQL User’s Guide
for a complete discussion of stored procedures. [ah, i see.]

There are two ways to dynamically execute SQL statements. One is to
perform the prepare and execute operations in one step, and the other is
to perform the prepare and execute operations separately. Executing a SQL
statement is what actually makes things happen: rows are added by an
insert statement, removed by a delete statement, changed by an update
statement, or retrieved by a select statement. [nice, i guess it does
support that.]
*** prepare and execute
- Prepare the dynamic SQL statement.
- Get a description of prepared statement input, if necessary.
- Get a description of prepared statement output, if necessary.
- Execute the prepared statement or declare and open a cursor on the
  prepared statement.
- Process results, if necessary.
- De-allocate the prepared statement.
**** preparing
1. Store the text of the statement in a character string host variable,
   for example:
   #+BEGIN_SRC c
   char *query = "select type, title, price from titles where title_id = ?";
   #+END_SRC
   The SQL statement may include one or more dynamic parameter markers
   that act as placeholders. A placeholder is represented by a “?”
   character. Placeholders can be specified:
   - For one or more columns in a select list
   - For one or more values in an insert statement
   - In the set clause of an update statement
   - In a where clause of a select or update statement
   At execution time, the application must substitute a value for each
   dynamic parameter marker.
2. Call ct_dynamic with type as CS_PREPARE to initiate a command to
   prepare the statement. To initiate a command to prepare the above SQL
   statement:
   #+BEGIN_SRC c
   ct_dynamic(cmd, CS_PREPARE, "myid", CS_NULLTERM, query, CS_NULLTERM);
   #+END_SRC
   To initiate a command to prepare a statement that executes a stored
   procedure, specify "exec sp_name" as the SQL text, where sp_name is the
   actual name of the stored procedure to be executed:
   #+BEGIN_SRC c
   ct_dynamic(cmd, CS_PREPARE, "myid", CS_NULLTERM, "exec sp_2", CS_NULLTERM);
   #+END_SRC
3. Call ct_send to send the command to the server.
4. Call ct_results as necessary to process the results of the command. A
   successful CS_PREPARE command will generate a CS_CMD_SUCCEED result.
**** executing
1. Call ct_dynamic with type as CS_EXECUTE to initiate a command to
   execute the statement.
2. Define the input values to the SQL statement by performing the
   following steps for each input value:
   - Prompt the end-user for an input value.
   - Call ct_param to pass the input value to the SQL statement.
   Alternately, if the application is using a dynamic SQL descriptor area,
   perform these steps for each input value:
   - Prompt the end-user for an input value.
   - Call ct_dyndesc with operation as CS_SETATT to put the value into the
     descriptor area.
   If the application is using a dynamic SQL descriptor area, then after
   all the input values have been defined, associate the dynamic SQL
   descriptor area with the prepared statement:
   - Call ct_dyndesc with operation as CS_USE_DESC.
   The input values are substituted for the dynamic parameter markers.
3. Call ct_send to send the command to the server.
4. Call ct_results as necessary to process the results of the command.
**** de-allocating
1. If the application used descriptor areas for the prepared statement
   input and output, de-allocate the descriptor areas by calling
   ct_dyndesc with operation as CS_DEALLOC once for each descriptor area.
2. Call ct_dynamic with type as CS_DEALLOC to initiate a command to
   de-allocate the prepared statement.
3. If the application declared and opened a cursor on the prepared
   statement, call ct_cursor with type as CS_CURSOR_CLOSE and option as
   CS_DEALLOC to initiate a command both to close and de-allocate the
   cursor.
4. Call ct_send to send the command to de-allocate the statement.
5. Call ct_results as necessary to process the results of the command.
*** vs. stored procedures
Because of the numerous restrictions of dynamic SQL, we recommend that
applications use stored procedures to accomplish the same tasks. Stored
procedures offer identical functionality to dynamic SQL except for the
ability to get a description of prepared statement input: creating a
stored procedure is analogous to preparing a SQL statement, a stored
procedure’s input parameters serve the same purpose as do dynamic
parameter markers, and executing a stored procedure is equivalent to
executing a prepared statement. [lambda vs. named procedure]
** error handling
All Client-Library routines return success or failure indications. It is
highly recommended that applications check these return codes.
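for the mainline style of return-code checking, a small checking macro
keeps the call sites readable. this is a generic sketch, not from the
manual: the =CT_CHECK= name and the =ct_fake_call= stand-in are made up,
and only CS_RETCODE/CS_SUCCEED/CS_FAIL correspond to real ctlib names
(stubbed here so the sketch is self-contained):
#+BEGIN_SRC c
#include <stdio.h>
#include <stdlib.h>

/* Stand-ins; in real code these come from ctpublic.h. */
typedef int CS_RETCODE;
#define CS_SUCCEED 1
#define CS_FAIL    0

/* Made-up helper macro: evaluate a Client-Library-style call and bail
   out with a diagnostic unless it returned CS_SUCCEED. */
#define CT_CHECK(call)                                      \
    do {                                                    \
        if ((call) != CS_SUCCEED) {                         \
            fprintf(stderr, "%s failed (%s:%d)\n",          \
                    #call, __FILE__, __LINE__);             \
            exit(EXIT_FAILURE);                             \
        }                                                   \
    } while (0)

/* Fake routine standing in for ct_init, ct_connect, etc. */
CS_RETCODE ct_fake_call(int succeed)
{
    return succeed ? CS_SUCCEED : CS_FAIL;
}
#+END_SRC
in the scheme wrapper the analogous choice is whether =allocate-context!=
et al. raise on non-CS_SUCCEED themselves, or return the code for the
caller to inspect (the =results!= question above).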
** sequence
At the context level, an application sets up its environment: allocating
one or more context structures, setting CS-Library properties for the
contexts, initializing Client-Library, and setting Client-Library
properties for the contexts. At the connection level, an application
connects to a server: allocating one or more connection structures,
setting properties for the connections, opening the connections, and
setting any server options for the connections. An application can
allocate a connection structure only after a context structure has been
allocated. At the command level, an application allocates one or more
command structures, sends commands, and processes results. An application
can allocate a command structure only after a connection structure has
been allocated.
** messages
** macros
Open Client provides the following macros to help an application decode a
Client-Library or CS-Library message number and break it into its four
parts so that each component can be displayed separately:
- CS_LAYER(msg_number) - identifies the layer reporting the error.
- CS_ORIGIN(msg_number) - indicates where the error manifested itself.
- CS_SEVERITY(msg_number) - indicates the severity of the error.
- CS_NUMBER(msg_number) - identifies the actual layer-specific error
  number being reported.
** parameters
*** null vs. unused
Pass NULL and unused pointer parameters as NULL. If the parameter has a
NULL value, the length variable associated with the parameter, if any,
must be 0 or CS_UNUSED. If the parameter is unused, the length variable
associated with the parameter, if any, must be CS_UNUSED. Pass non-pointer
unused parameters as CS_UNUSED.
*** strings
Most string parameters are associated with a parameter that indicates the
length of the string. When passing a null-terminated string, an
application can pass the length parameter as CS_NULLTERM.
When passing a string that is not null-terminated, an application must set
the associated length parameter to the length, in bytes, of the string. If
a string parameter is NULL, the associated length parameter must be 0 or
CS_UNUSED.
*** item numbers
An application can call ct_res_info with type as CS_NUMDATA to get the
number of items in the current result set.
** properties
Login properties are used when logging into a server. Login properties
include CS_USERNAME, CS_PASSWORD, and CS_PACKETSIZE. An application calls
ct_config, ct_con_props, and ct_cmd_props to set and retrieve
Client-Library properties at the context, connection, and command
structure levels, respectively. An application calls cs_config to set and
retrieve CS-Library context properties.
** results
- Regular row results
- Cursor row results
- Parameter results
- Stored procedure return status results
- Compute row results
- Message results
- Describe results
- Format results
An application processes results by calling ct_results, which indicates
the type of result available by setting *result_type. ct_results sets
*result_type to CS_CMD_DONE to indicate that the results of a “logical
command” have been completely processed. A logical command is generally
considered to be any Open Client command defined via ct_command,
ct_dynamic, or ct_cursor. Some commands, for example a language command
containing a Transact-SQL update statement, do not generate results.
ct_results sets *result_type to CS_CMD_SUCCEED or CS_CMD_FAIL to indicate
the status of a command that does not return results.
*** processing
#+BEGIN_EXAMPLE
while ct_results returns CS_SUCCEED
  case CS_ROW_RESULT
    ct_res_info to get the number of columns
    for each column:
      ct_describe to get a description of the column
      ct_bind to bind the column to a program variable
    end for
    while ct_fetch returns CS_SUCCEED or CS_ROW_FAIL
      if CS_SUCCEED
        process the row
      else if CS_ROW_FAIL
        handle the row failure
      end if
    end while
    switch on ct_fetch’s final return code
      case CS_END_DATA...
      case CS_CANCELED...
      case CS_FAIL...
    end switch
  end case
  case CS_CURSOR_RESULT
    ct_res_info to get the number of columns
    for each column:
      ct_describe to get a description of the column
      ct_bind to bind the column to a program variable
    end for
    while ct_fetch returns CS_SUCCEED or CS_ROW_FAIL
      if CS_SUCCEED
        process the row
      else if CS_ROW_FAIL
        handle the row failure
      end if
      /* For update or delete only: */
      if target row is not the row just fetched
        ct_keydata to specify the target row key
      end if
      /* End for update or delete only */
      /* To send another cursor command: */
      ct_cursor to initiate the cursor command
      ct_param if command is update of some columns only
      ct_send to send the command
      while ct_results returns CS_SUCCEED
        (...process results...)
      end while
      /* End to send another cursor command */
    end while
    switch on ct_fetch’s final return code
      case CS_END_DATA...
      case CS_CANCELED...
      case CS_FAIL...
    end switch
  end case
  case CS_PARAM_RESULT
    ct_res_info to get the number of parameters
    for each parameter:
      ct_describe to get a description of the parameter
      ct_bind to bind the parameter to a variable
    end for
    while ct_fetch returns CS_SUCCEED or CS_ROW_FAIL
      if CS_SUCCEED
        process the row of parameters
      else if CS_ROW_FAIL
        handle the failure
      end if
    end while
    switch on ct_fetch’s final return code
      case CS_END_DATA...
      case CS_CANCELED...
      case CS_FAIL...
    end switch
  end case
  case CS_STATUS_RESULT
    ct_bind to bind the status to a program variable
    while ct_fetch returns CS_SUCCEED or CS_ROW_FAIL
      if CS_SUCCEED
        process the return status
      else if CS_ROW_FAIL
        handle the failure
      end if
    end while
    switch on ct_fetch’s final return code
      case CS_END_DATA...
      case CS_CANCELED...
      case CS_FAIL...
    end switch
  end case
  case CS_COMPUTE_RESULT
    (optional: ct_compute_info to get bylist length, bylist, or compute
     row id)
    ct_res_info to get the number of columns
    for each column:
      ct_describe to get a description of the column
      ct_bind to bind the column to a program variable
      (optional: ct_compute_info to get the compute column id or the
       aggregate operator for the compute column)
    end for
    while ct_fetch returns CS_SUCCEED or CS_ROW_FAIL
      if CS_SUCCEED
        process the compute row
      else if CS_ROW_FAIL
        handle the failure
      end if
    end while
    switch on ct_fetch’s final return code
      case CS_END_DATA...
      case CS_CANCELED...
      case CS_FAIL...
    end switch
  end case
  case CS_MSG_RESULT
    ct_res_info to get the message id
    code to handle the message
  end case
  case CS_DESCRIBE_RESULT
    ct_res_info to get the number of columns
    for each column:
      ct_describe or ct_dyndesc to get a description
    end for
  end case
  case CS_ROWFMT_RESULT
    ct_res_info to get the number of columns
    for each column:
      ct_describe to get a column description
      send the information on to the gateway client
    end for
  end case
  case CS_COMPUTEFMT_RESULT
    ct_res_info to get the number of columns
    for each column:
      ct_describe to get a column description
      (if required: ct_compute_info for compute information)
      send the information on to the gateway client
    end for
  end case
  case CS_CMD_DONE
    indicates a command’s results are completely processed
  end case
  case CS_CMD_SUCCEED
    indicates the success of a command that returns no results
  end case
  case CS_CMD_FAIL
    indicates a command failed
  end case
end while
switch on ct_results’ final return code
  case CS_END_RESULTS
    indicates no more results
  end case
  case CS_CANCELED
    indicates results were canceled
  end case
  case CS_FAIL
    indicates ct_results failed
  end case
end switch
#+END_EXAMPLE
*** value
When processing a result set, there are three ways for an application to
retrieve a result item’s value:
- It can call ct_bind to associate a result item with a program variable.
  When the program calls ct_fetch to fetch a result row, the item’s value
  is automatically copied into the associated program variable. Most
  applications will use this method for all result items except large text
  or image values.
- It can call ct_get_data to retrieve a result item’s value in chunks.
  After calling ct_fetch to fetch the row, the application calls
  ct_get_data in a loop. Each ct_get_data call retrieves a chunk of the
  result item’s value. Most applications will use ct_get_data only to
  retrieve large text or image values.
- It can call ct_dyndesc to retrieve result item descriptions and values.
  An application calls ct_dyndesc once for each result item, after calling
  ct_fetch to fetch the row. Typical applications will not use ct_dyndesc,
  which is intended for precompiler support.
** types
* ctlib guide
** applications
*** interpreter
- ct_command(CS_LANG_CMD) to define a language command and its text
- ct_send to send it to the server
- ct_results to read the results
- ct_res_info and ct_describe to find out column formats
- ct_bind and ct_fetch to retrieve rows
*** data entry
- ct_command(CS_RPC_CMD) to define an RPC command
- ct_param or ct_setparam to define parameter values with which to call
  the procedure
- ct_send to send the command to the server
- ct_results, ct_bind, ct_fetch, and so forth, to read the results
*** interactive
[we might be doing this with the library; but also the first, too.]
- Prepare the statement, by sending a ct_dynamic(CS_PREPARE) command and
  handling the results
- Query for parameter formats, by sending a ct_dynamic(CS_DESCRIBE_INPUT)
  command and handling the results
- After prompting for input values, execute the statement by sending a
  ct_dynamic(CS_EXECUTE) command and handling the results
** example
1. Set up the Client-Library programming environment:
   1. Use cs_ctx_alloc to allocate a context structure.
   2. Use cs_config to set any CS-Library properties for the context.
   3. Use ct_init to initialize Client-Library.
   4. Use ct_config to set Client-Library properties for the context.
2. Define error handling. Most applications use callback routines to
   handle errors:
   1. Use cs_config(CS_MESSAGE_CB) to install a CS-Library error callback.
   2. Use ct_callback to install a client message callback.
   3. Use ct_callback to install a server message callback.
3. Connect to a server:
   1. Use ct_con_alloc to allocate a connection structure.
   2. Use ct_con_props to set any properties in the connection structure.
   3. Use ct_connect to open a connection to a server.
   4. Use ct_options to set any server options for this connection.
4. Send a language command to the server:
   1. Use ct_cmd_alloc to allocate a command structure.
   2. Use ct_command to initiate a language command.
   3. Use ct_send to send the command.
5. Process the results of the command:
   1. Use ct_results to set up results for processing (called in a loop).
   2. Use ct_res_info to get information about a result set.
   3. Use ct_describe to get information about a result item.
   4. Use ct_bind to bind a result item to program data space.
   5. Use ct_fetch to fetch result rows (called in a loop).
6. Finish:
   1. Use ct_cmd_drop to deallocate the command structure.
   2. Use ct_close to close the connection with the server.
   3. Use ct_exit to exit Client-Library.
   4. Use cs_ctx_drop to deallocate the context structure.
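the allocation/teardown pairs in the steps above nest, which matters for
the scheme wrapper's resource management; a sketch of the nesting (my own
summary, not from the manual; ct_con_drop is not in the step list but
exists for dropping the connection structure):
#+BEGIN_EXAMPLE
cs_ctx_alloc / ct_init            ; context level
    ct_con_alloc / ct_connect     ; connection level
        ct_cmd_alloc              ; command level
            ct_command, ct_send, ct_results loop
        ct_cmd_drop
    ct_close / ct_con_drop
ct_exit / cs_ctx_drop
#+END_EXAMPLE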
** initializing client library
*** external configuration
As an alternative to setting properties with hard-coded ct_config calls,
Client-Library allows external configuration of property values for
applications that have been configured to use this feature.
** connect
For a complete list of connection properties, see the ct_con_props
reference page in the Open Client Client-Library/C Reference Manual.
** send commands
** process the results of a command
#+BEGIN_EXAMPLE
while ct_results returns CS_SUCCEED
  switch on result_type
    case row results
      for each column: ct_bind
      end for
      while ct_fetch is returning rows
        process each row
      end while
      check ct_fetch’s final return code
    end case row results
    case command done
      ....
    case command failed
      ....
    case other result type....
      ... raise an error ...
  end switch
end while
check ct_results’ final return code
#+END_EXAMPLE
** command rules
Within a connection, the results of a command must be completely processed
before another command can be sent. The exception to this rule is a
ct_cursor(CS_CURSOR_OPEN) command, which generates a cursor result set.
** types
Client-Library supports a wide range of datatypes, which are shared with
CS-Library and Server-Library. In most cases, they correspond directly to
Adaptive Server datatypes. [most cases?] cstypes.h

There are two reasons why you should use Open Client/Server datatypes in
your application rather than the native C datatypes: heterogeneous
architecture, and portability of application code. For this reason, always
use the correct CS_TYPEDEF to declare any variable that holds data to be
sent to the server or read from the results of a server command.
*** type constants
Type constants are symbolic values that identify the datatype of a program
variable. Many CS-Library, Client-Library, and Server-Library routines
take the address of a program variable as a CS_VOID * parameter. Type
constants are required to identify the datatype when passing CS_VOID *
parameters.
Typically, a type constant is passed to a routine as the datatype field of a CS_DATAFMT structure.
** errors and messages
*** CS_EXTRA_INF
The CS_EXTRA_INF property determines whether or not Client-Library returns certain kinds of informational messages, such as the number of rows affected by a command.
*** sequencing long messages
Message callback routines and ct_diag return Client-Library and server messages in CS_CLIENTMSG and CS_SERVERMSG structures. In the CS_CLIENTMSG structure, the message text is stored in the msgstring field; in the CS_SERVERMSG structure, in the text field. Both msgstring and text are CS_MAX_MSG bytes long. If a message longer than CS_MAX_MSG - 1 bytes is generated, Client-Library’s default behavior is to truncate the message. However, an application can use the CS_NO_TRUNCATE property to instruct Client-Library to “sequence” long messages instead of truncating them.
*** extended error data
On receiving this message, the application must identify the problem column or columns to the end user so that the user can readily correct them. This information is also available in the text of the duplicate key message, but an application must parse the text to extract the column names. For information about how to identify and process extended error data, see the “Error and Message Handling” topics page in the Open Client Client-Library/C Reference Manual.
** command types
The following types of commands can take parameters:
- A language command, when the command text contains variables
- An RPC command, when the stored procedure takes parameters
- A cursor-declare command, when the body of the cursor contains host language parameters
- A cursor-open command, when the body of the cursor contains host language parameters
- A message command
- A dynamic SQL execute command
An application calls ct_param or ct_setparam once for each parameter that a command requires.
These routines perform the same function, except that ct_param copies a parameter value, while ct_setparam copies the address of a variable that contains the value. If ct_setparam is used, Client-Library reads the parameter value when the command is sent.
*** processing
Each time a command is sent, the application must process or cancel the results. A typical application calls ct_results until it returns a value other than CS_SUCCEED.
*** language commands
Language commands for Adaptive Server must be written in Transact-SQL. [that's not MSSQL, though.]
Your application initiates a language command by calling ct_command with type as CS_LANG_CMD and *buffer as the language text. Language commands can take parameters. For Adaptive Server client applications, parameter placement is indicated by undeclared variables in the command text. For example, a language command such as the one below takes a parameter whose value is substituted for “@state_name”:
#+BEGIN_EXAMPLE
select au_lname, city from pubs2..authors
    where state = @state_name
#+END_EXAMPLE
[again, not MSSQL.]
*** cursors
Use Client-Library cursors when you want to process two or more commands at the same time while using only one server connection. [only use case?]
** results-handling
*** types
- Regular row results – rows returned when the server processes a select statement.
- Cursor row results – rows returned when the server processes a ct_cursor Client-Library cursor-open command.
- Parameter results – fetchable data that can represent:
  - Output values for an Adaptive Server stored procedure’s return parameters
  - Output values for an Open Server registered procedure’s return parameters
  - A new timestamp value for an updated text/image column (seen only when processing the results of a ct_command send-data command)
  - A new timestamp value for a row that was updated with a language command containing a browse-mode update statement
- Stored procedure return status results – the return value from an Adaptive Server stored procedure or Open Server registered procedure.
- Compute row results – intermediate rows returned when the server processes a select statement with a compute by clause.
- Message results – a message ID returned by an Open Server application’s message command handler while processing the results of a message command.
- Describe results – informational results that describe the format of a prepared dynamic SQL statement’s input parameters or result columns.
- Format results – informational results used by Open Server gateway applications to retrieve regular row and compute row formats before the actual data arrives.
A single command can generate more than one type of result. For example, a language command that executes a stored procedure can generate multiple regular row and compute row result sets, a parameter result set, and a return status result set. For this reason, it is important that you code applications to handle all types of results that a server can generate.
*** loop
#+BEGIN_EXAMPLE
while ct_results returns CS_SUCCEED
  (optional) ct_res_info to get current command number
  switch on result_type
    /*
    ** Values of result_type that indicate
    ** fetchable results:
    */
    case CS_COMPUTE_RESULT...
    case CS_CURSOR_RESULT...
    case CS_PARAM_RESULT...
    case CS_ROW_RESULT...
    case CS_STATUS_RESULT...
    /*
    ** Values of result_type that indicate
    ** non-fetchable results:
    */
    case CS_COMPUTEFMT_RESULT...
    case CS_MSG_RESULT...
    case CS_ROWFMT_RESULT...
    case CS_DESCRIBE_RESULT...
    /*
    ** Other values of result_type:
    */
    case CS_CMD_DONE...
      (optional) ct_res_info to get the number of rows
      affected by the current command
    case CS_CMD_FAIL...
    case CS_CMD_SUCCEED...
  end switch
end while
switch on ct_results’ final return code
  case CS_END_RESULTS...
  case CS_CANCELED...
  case CS_FAIL...
end switch
#+END_EXAMPLE
**** row result
#+BEGIN_EXAMPLE
case CS_ROW_RESULT
  ct_res_info(CS_NUMDATA) to get the number of columns
  for each column:
    ct_describe to get a description of the column
    ct_bind to bind the column to a program variable
  end for
  while ct_fetch returns CS_SUCCEED or CS_ROW_FAIL
    if CS_SUCCEED
      process the row
    else if CS_ROW_FAIL
      handle the row failure
    end if
  end while
  switch on ct_fetch’s final return code
    case CS_END_DATA...
    case CS_CANCELED...
    case CS_FAIL...
  end switch
end case
#+END_EXAMPLE
**** cursor result
#+BEGIN_EXAMPLE
case CS_CURSOR_RESULT
  ct_res_info(CS_NUMDATA) to get the number of columns
  for each column:
    ct_describe to get a description of the column
    ct_bind to bind the column to a program variable
  end for
  while ct_fetch returns CS_SUCCEED or CS_ROW_FAIL
        and cursor has not been closed
    if CS_SUCCEED
      process the row
    else if CS_ROW_FAIL
      handle the row failure
    end if
    /* For update or delete only: */
    if target row is not the row just fetched
      ct_keydata to specify the target row key
    end if
    /* End for update or delete only */
    /* To send a nested cursor update, delete, or close command: */
    ct_cursor to initiate the cursor command
    /* For updates/deletes whose “where” clause contains variables */
    ct_param or ct_setparam for each parameter
    /* End for updates/deletes whose ... */
    ct_send to send the command
    while ct_results returns CS_SUCCEED
      (...process results...)
    end while
    /* End to send a nested cursor command */
  end while
  switch on ct_fetch’s final return code
    case CS_END_DATA...
    case CS_CANCELED...
    case CS_FAIL...
  end switch
  if cursor was closed
    break out of outer ct_results loop
  end if
end case
#+END_EXAMPLE
**** parameter result
#+BEGIN_EXAMPLE
case CS_PARAM_RESULT
  ct_res_info(CS_NUMDATA) to get the number of parameters
  for each parameter:
    ct_describe to get a description of the parameter
    ct_bind to bind the parameter to a variable
  end for
  while ct_fetch returns CS_SUCCEED or CS_ROW_FAIL
    if CS_SUCCEED
      process the row of parameters
    else if CS_ROW_FAIL
      handle the failure
    end if
  end while
  switch on ct_fetch’s final return code
    case CS_END_DATA...
    case CS_CANCELED...
    case CS_FAIL...
  end switch
end case
#+END_EXAMPLE
**** return status
#+BEGIN_EXAMPLE
case CS_STATUS_RESULT
  ct_bind to bind the status to a program variable
  while ct_fetch returns CS_SUCCEED or CS_ROW_FAIL
    if CS_SUCCEED
      process the return status
    else if CS_ROW_FAIL
      handle the failure
    end if
  end while
  switch on ct_fetch’s final return code
    case CS_END_DATA...
    case CS_CANCELED...
    case CS_FAIL...
  end switch
end case
#+END_EXAMPLE
**** compute results (adaptive server bullshit?)
#+BEGIN_EXAMPLE
case CS_COMPUTE_RESULT
  (optional) ct_compute_info to get bylist length, bylist,
             or compute row id
  ct_res_info(CS_NUMDATA) to get the number of columns
  for each column:
    ct_describe to get a description of the column
    ct_bind to bind the column to a program variable
    (optional: ct_compute_info to get the compute column id
     or the aggregate operator for the compute column)
  end for
  while ct_fetch returns CS_SUCCEED or CS_ROW_FAIL
    if CS_SUCCEED
      process the compute row
    else if CS_ROW_FAIL
      handle the failure
    end if
  end while
  switch on ct_fetch’s final return code
    case CS_END_DATA...
    case CS_CANCELED...
    case CS_FAIL...
  end switch
end case
#+END_EXAMPLE
**** message results
#+BEGIN_EXAMPLE
case CS_MSG_RESULT
  ct_res_info to get the message ID
  code to handle the message ID
end case
#+END_EXAMPLE
**** describe results
#+BEGIN_EXAMPLE
case CS_DESCRIBE_RESULT
  ct_res_info to get the number of columns
  for each column:
    ct_describe or ct_dyndesc to get a description
  end for
end case
#+END_EXAMPLE
*** result_type as command status
- CS_CMD_DONE – indicates that the results of a logical command have been completely processed. See “Logical commands” on page 101 for an explanation of this term.
- CS_CMD_SUCCEED – indicates the success of a command that returns no data, such as a Transact-SQL insert or delete command.
- CS_CMD_FAIL – indicates that, due to error, the server failed to execute a server command. For example, the text of a language command might contain a syntax error or refer to a nonexistent object.
Because a Client-Library command can execute multiple server commands, an application must either:
- Continue to call ct_results to process results generated by any other server commands contained in the original Client-Library command, or
- Call ct_cancel(CS_CANCEL_ALL) to cancel the Client-Library command and discard its results.
Logical commands and Client-Library commands are not equivalent. A Client-Library command can execute multiple logical commands on the server; for example, a stored procedure can execute multiple select statements that return data, and each such statement represents one logical command. A logical command can generate one or more result sets; for example, a select statement can return multiple regular-row and compute result sets. [and that, mein freunds: why we don't do proprietary software.]
*** ct_results final return code
- CS_END_RESULTS – indicates a normal loop exit.
- CS_CANCELED – indicates that results were canceled: ct_cancel(CS_CANCEL_ALL) or ct_cancel(CS_CANCEL_ATTN) was called while processing results.
- CS_FAIL – indicates a serious client-side or network error, such as a communication failure or a memory shortage.
** dynamic commands
*** prepare and execute
1. Prepare the dynamic SQL statement.
   - ct_dynamic(CS_PREPARE)
   - ct_send
   - ct_results, in a loop
2. (Optional) Get a description of the parameters required to execute the prepared statement.
   - ct_dynamic(CS_DESCRIBE_INPUT)
   - ct_send
   - ct_results, in a loop
   ct_results returns with a result_type of CS_DESCRIBE_RESULT to indicate that the parameter descriptions are available.
3. (Optional) Get a description of the result columns returned by the prepared statement.
   - ct_dynamic(CS_DESCRIBE_OUTPUT)
   - ct_send
   - ct_results, in a loop
   ct_results returns with a result_type of CS_DESCRIBE_RESULT to indicate that the description is available.
4. Execute the prepared statement, or declare and open a cursor on the prepared statement. To execute it without a cursor:
   - ct_dynamic(CS_EXECUTE)
   - If necessary, define parameter values with ct_param, ct_setparam, ct_dyndesc, or ct_dynsqlda.
   - ct_send
   - ct_results, in a loop. Fetchable results may require processing.
   For a description of how to execute a prepared statement with a cursor, see “Using Client-Library cursors” on page 109.
5. Deallocate the prepared statement. If a cursor is declared on the statement, first close and deallocate the cursor:
   - ct_cursor(CS_CURSOR_CLOSE, CS_DEALLOC) or, if the cursor is not open, ct_cursor(CS_CURSOR_DEALLOC)
   - ct_send
   - ct_results, in a loop
   Then initiate and send a command to deallocate the prepared statement:
   - ct_dynamic(CS_DEALLOC)
   - ct_send
   - ct_results, in a loop
   The deallocate command returns no fetchable results.
* ffi
see this shit, too: http://wiki.call-cc.org/man/4/Unit%20lolevel#foreign-pointers
also: http://wiki.call-cc.org/man/4/C%20interface
** foreign-safe-lambda
[syntax] (foreign-safe-lambda RETURNTYPE NAME ARGTYPE ...)
This is similar to foreign-lambda, but also allows the called function to call Scheme functions and allocate Scheme data-objects. See Callbacks.
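A sketch of how this binding might use it: since our installed ct_callback message handlers would be Scheme procedures, a call like ct_results can re-enter Scheme and therefore needs foreign-safe-lambda rather than foreign-lambda. The binding name and the choice to treat CS_COMMAND * as an opaque c-pointer are this sketch's assumptions, not settled design.

#+BEGIN_SRC scheme
;; sketch: ct_results may trigger the Scheme error callbacks installed
;; via ct_callback, so bind it with foreign-safe-lambda.
#>
#include <ctpublic.h>
<#

(define ct-results
  (foreign-safe-lambda int "ct_results"
    c-pointer              ; CS_COMMAND *cmd (opaque here -- an assumption)
    (c-pointer int)))      ; CS_INT *result_type, written by the library
#+END_SRC

Calls that can never re-enter Scheme (ct_send, ct_cmd_drop, ...) can stay plain foreign-lambda, which is cheaper.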