
Monday, March 19, 2012

ODBC API Question

Does anyone know if there's a way, using the SQL Server ODBC Bulk Copy
extensions (bcp_bind, bcp_sendrow, bcp_batch, etc.), to determine exactly
which row or rows were not imported when an error occurs (a primary key
violation, for example)? With SQLBulkOperations it can be done with a row
status array, but I don't like the fact that I would have to read the entire
table into memory and iterate over it with SQLFetch/SQLFetchScroll to
update/insert rows using SQLBulkOperations. Any ideas appreciated.
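
For reference, here is roughly what the status-array approach mentioned above looks like in ODBC C code. The per-row reporting mechanism (SQL_ATTR_ROW_STATUS_PTR filled in by SQLBulkOperations) is standard ODBC; the table name, column name, cursor/concurrency settings, and the already-connected hdbc are placeholder assumptions, not anything from the original posts. It only covers the insert (SQL_ADD) path, which is why it sidesteps the full-table-fetch problem the question is really about.

#include <windows.h>
#include <stdio.h>
#include <sql.h>
#include <sqlext.h>

#define BATCH 3

/* Sketch: add BATCH rows with SQLBulkOperations and read the per-row
   status array afterward to see which rows were rejected. */
void bulk_add_with_status(SQLHDBC hdbc)
{
    SQLHSTMT     hstmt;
    SQLINTEGER   ids[BATCH]   = { 1, 2, 10 };  /* column-wise bound data to insert */
    SQLLEN       idInd[BATCH] = { 0, 0, 0 };   /* length/indicator values */
    SQLUSMALLINT rowStatus[BATCH];             /* per-row result of the bulk add */

    SQLAllocHandle(SQL_HANDLE_STMT, hdbc, &hstmt);

    /* SQLBulkOperations(SQL_ADD) needs an updatable cursor over the target */
    SQLSetStmtAttr(hstmt, SQL_ATTR_CURSOR_TYPE,
                   (SQLPOINTER)SQL_CURSOR_KEYSET_DRIVEN, 0);
    SQLSetStmtAttr(hstmt, SQL_ATTR_CONCURRENCY,
                   (SQLPOINTER)SQL_CONCUR_LOCK, 0);
    SQLSetStmtAttr(hstmt, SQL_ATTR_ROW_ARRAY_SIZE, (SQLPOINTER)(SQLLEN)BATCH, 0);
    SQLSetStmtAttr(hstmt, SQL_ATTR_ROW_STATUS_PTR, rowStatus, 0);

    /* "target_table" / "id" are placeholder names */
    SQLExecDirect(hstmt, (SQLCHAR *)"SELECT id FROM target_table", SQL_NTS);
    SQLBindCol(hstmt, 1, SQL_C_SLONG, ids, 0, idInd);

    /* Adds BATCH rows from the bound arrays; the driver fills rowStatus[] */
    SQLRETURN rc = SQLBulkOperations(hstmt, SQL_ADD);
    if (rc == SQL_SUCCESS || rc == SQL_SUCCESS_WITH_INFO) {
        for (int i = 0; i < BATCH; i++)
            if (rowStatus[i] == SQL_ROW_ERROR)         /* e.g. PK violation */
                printf("row %d was not inserted\n", i);
    }

    SQLCloseCursor(hstmt);
    SQLFreeHandle(SQL_HANDLE_STMT, hstmt);
}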
"Mike C#" <xyz@.xyz.com> wrote in message
news:uK2jd9i5GHA.3444@.TK2MSFTNGP02.phx.gbl...
> Does anyone know if there's a way, using SQL Server ODBC Bulk
> Copy extensions (bcp_bind, bcp_sendrow, bcp_batch, etc.) to
> determine exactly which row/rows were not imported if an error
> occurs (like a Violation of Primary Key, etc.) With
> SQLBulkOperations it can be done with a status array, but I
> don't like the fact that I would have to read the entire table
> into memory and iterate it using SQLFetch/SQLFetchScroll to
> update/insert new rows using SQLBulkOperations. Any ideas
> appreciated.
We have the same problem and haven't found a reasonable solution.
- Arnie
|||"Arnie" <99yoda@.newsgroup.nospam> wrote in message
news:%23AwERXv5GHA.5108@.TK2MSFTNGP03.phx.gbl...
> We have the same problem and haven't found a reasonable solution.
Aggravating, isn't it? I made a little app that tried to limit the rows
required and fetch-scroll through them 2,000 at a time. For instance, if I
have a file with ID #s for each record in the file like 1, 2, 10, 100, 4000,
9999, then I only need to pull over the rows in the table with ID #s between
1 and 9999. It breaks down when I have a huge spread, like if the file only
had two rows, for ID #s 1 and 9999. Then I'm pulling 9997 rows across that
I don't need.
I am also playing with SELECTing only x (maybe 2,000?) rows at a time into
the cursor. As I go through the file above, I pull rows with ID #s 1 -
2000, then I pull 4000 - 5999 and finally jump up to 9999 - 11998 and that's
it. I'm still trying to find the "sweet-spot" where I'll get the best
efficiency for the fewest SELECTs. It seems like it is working and should
be pretty efficient as long as the flat file is sorted properly, but it adds
to the complexity by about a zillion percent, especially if your primary key
is composed of multiple columns populated from the file...
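
For what it's worth, the window-planning half of that approach can be sketched as below. The 2,000-row window, the single-column integer key, and the function/table names are placeholder assumptions; in the real app each range would feed a parameterized "SELECT ... WHERE id BETWEEN ? AND ?" that the scrollable cursor then works through.

#include <stdio.h>

#define WINDOW 2000

/* Given the sorted key values from the flat file, compute the minimal set of
   WINDOW-wide BETWEEN ranges that covers every key, and print one SELECT
   per range (a stand-in for executing it against the target table). */
void plan_selects(const long *fileIds, int count)
{
    int i = 0;
    while (i < count) {
        long lo = fileIds[i];
        long hi = lo + WINDOW - 1;
        /* skip every file key already covered by this window */
        while (i < count && fileIds[i] <= hi)
            i++;
        printf("SELECT ... WHERE id BETWEEN %ld AND %ld\n", lo, hi);
    }
}

int main(void)
{
    long ids[] = { 1, 2, 10, 100, 4000, 9999 };  /* sorted IDs from the flat file */
    plan_selects(ids, 6);                        /* -> 1-2000, 4000-5999, 9999-11998 */
    return 0;
}

With the example IDs from the post this produces exactly the three ranges described (1-2000, 4000-5999, 9999-11998); a multi-column primary key would need a more elaborate range comparison, which is where the extra complexity comes from.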
