Heng Xing Tian Tai Lab of Xi'an City (abbr, HXTT)

HXTT DBF
update DBF takes different time
hws
2012-07-24 18:44:44.0
Clipper DBF file UPIOMAS.DBF and index UPIOMAS1.NTX.
UPIOMAS.DBF contains 31 columns and 180,000 records; file size 34.2MB.
UPIOMAS1.NTX is keyed on DRUMNO+FLOW_SEQ; file size 3.5MB.

For example, executing this SQL (Java and WebSphere):

UPDATE UPIOMAS SET place='E1E00' , DATE_IN='20120725' , TIME_IN='084058' , EMPL_IN='80132' , NUTNO='Q0N0M1400CG137 T03A' , LOTNO='127008572' , STATUS='C' , DATE_OP='20120725' , TIME_OP='084058' , EMPL_OP='80132' where DRUMNO='173146' and FLOW_SEQ='07'

If I have 10 records with different DRUMNOs to update in the DBF file, the first record takes 1 second; but when it reaches the third record, that one takes 60 seconds; then the fourth record takes 1 second, the fifth record takes 1 second... back to normal.

And sometimes the first record takes 60 seconds, then the second record takes 1 second, the third record takes 1 second... normal.

I want to find out why it takes 60 seconds. I am confused.


Re: update DBF takes different time
HXTT Support
2012-07-24 19:14:11.0
>UPIOMAS1.NTX is keyed on DRUMNO+FLOW_SEQ; file size 3.5MB
>where DRUMNO='173146' and FLOW_SEQ='07'
HXTT DBF can use that index to speed up your update if you use the "Incremental Index"=NTX connection property, or if you have an UPIOMAS.CGP file which lists UPIOMAS1.NTX.
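A rough sketch (not from the original thread) of how that connection property could be passed in Java; the property name is exactly as quoted in the reply above, and the jdbc:DBF URL is the one hws posts later in this thread:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class IncrementalIndexSketch {
    // Build the connection properties named in the reply above.
    static Properties buildProperties() {
        Properties props = new Properties();
        // Tell the driver to maintain/use the incremental .NTX indexes.
        props.setProperty("Incremental Index", "NTX");
        return props;
    }

    public static void main(String[] args) throws Exception {
        // Requires the HXTT DBF driver jar on the classpath and the DBF files on H:.
        Class.forName("com.hxtt.sql.dbf.DBFDriver");
        Connection con = DriverManager.getConnection("jdbc:DBF:/H:/UPM/UPMDAT", buildProperties());
        con.close();
    }
}
```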
Re: update DBF takes different time
HXTT Support
2012-07-24 19:17:18.0
>I want to find out why it takes 60 seconds. I am confused.
Maybe your NTX index is not balanced and many rows sometimes need to be updated, or that table was just closed. BTW, you can use a PreparedStatement to run where DRUMNO=? and FLOW_SEQ=? .
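A hedged sketch of the PreparedStatement approach suggested above (modern try-with-resources shown for brevity; on JDK 1.2 you would close resources explicitly). The column list mirrors the UPDATE in the first post; the driver parses the statement once and only the bound values change per record:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class PreparedUpdateSketch {
    // Parameterized form of the UPDATE from the first post: 10 SET columns + 2 key columns.
    static final String SQL =
        "UPDATE UPIOMAS SET place=?, DATE_IN=?, TIME_IN=?, EMPL_IN=?, NUTNO=?, LOTNO=?, "
      + "STATUS=?, DATE_OP=?, TIME_OP=?, EMPL_OP=? WHERE DRUMNO=? AND FLOW_SEQ=?";

    public static void main(String[] args) throws Exception {
        Class.forName("com.hxtt.sql.dbf.DBFDriver");
        try (Connection con = DriverManager.getConnection("jdbc:DBF:/H:/UPM/UPMDAT");
             PreparedStatement ps = con.prepareStatement(SQL)) {
            // One (DRUMNO, FLOW_SEQ) pair per record to update; values from the thread.
            String[][] keys = { { "173146", "07" } };
            for (String[] key : keys) {
                ps.setString(1, "E1E00");
                ps.setString(2, "20120725");
                ps.setString(3, "084058");
                ps.setString(4, "80132");
                ps.setString(5, "Q0N0M1400CG137 T03A");
                ps.setString(6, "127008572");
                ps.setString(7, "C");
                ps.setString(8, "20120725");
                ps.setString(9, "084058");
                ps.setString(10, "80132");
                ps.setString(11, key[0]);
                ps.setString(12, key[1]);
                ps.executeUpdate();
            }
        }
    }
}
```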
Re: update DBF takes different time
hws
2012-07-25 02:39:51.0
An UPIOMAS.CGP file already listed UPIOMAS1.NTX before,
so it should be balanced.

A PreparedStatement may be a good suggestion.

But:

A Java program continuously updates only 20 records.
One of the records takes much more time (60 seconds), and the other 19 records take just 1 second each.
The total time = 1*60 + 19*1 = 79 seconds; users say the performance is not good.
In my opinion it should take only 20*1 = 20 seconds. Then users would say it is good.

So one of the records taking much more time (60 seconds) is not normal.




Re: update DBF takes different time
HXTT Support
2012-07-25 02:51:27.0
>One of the records takes much more time (60 seconds); it is not normal.
Running reindex all on UPIOMAS will rebuild the balanced tree.
where DRUMNO='173146' and FLOW_SEQ='07' can be changed into where DRUMNO+FLOW_SEQ='17314607'
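Since the NTX key is the plain string concatenation DRUMNO+FLOW_SEQ, the two-column predicate collapses into a single key comparison. A tiny illustration (the helper name is ours, not HXTT's):

```java
public class NtxKeySketch {
    // UPIOMAS1.NTX is keyed on DRUMNO+FLOW_SEQ, i.e. simple string concatenation
    // of the two character columns.
    static String ntxKey(String drumno, String flowSeq) {
        return drumno + flowSeq;
    }

    public static void main(String[] args) {
        // The WHERE clause from the thread becomes one key lookup:
        System.out.println(ntxKey("173146", "07")); // prints 17314607
    }
}
```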
Re: update DBF takes different time
hws
2012-07-25 03:25:52.0
Yes.
It updates all of the records correctly.

But why does just one record take 60 seconds??? Maybe the first record.
If the SQL were not good, all 20 records would take 20*60 seconds,
not just one record.

Am I forgetting something?

Re: update DBF takes different time
HXTT Support
2012-07-25 03:35:34.0
What's your JDBC URL? And are you using any Xbase-compatible connection properties?
Re: update DBF takes different time
hws
2012-07-25 03:51:51.0
Some of the Java code is below:

private static final String driver="com.hxtt.sql.dbf.DBFDriver";
private String dbURL="jdbc:DBF:/H:/UPM/UPMDAT"; //H: is a mapped NOVELL network drive


this.set_dbURL(dbf_path);
Class.forName(driver);
con=DriverManager.getConnection(dbURL);
Re: update DBF takes different time
HXTT Support
2012-07-25 03:54:48.0
>//H: is a mapped NOVELL network drive
Then maybe it sometimes hits a network traffic issue.
Re: update DBF takes different time
hws
2012-07-25 04:02:57.0
A network traffic issue?
I will check it tomorrow (sorry, I am off to dinner).
Thank you anyway; I will let you know whether the conclusion is a network traffic issue or not.
Re: update DBF takes different time
hws
2012-07-25 23:31:13.0
The network traffic was checked and is fine, because both servers are on the backbone.

The WebSphere JVM was also checked, and the DBF file is not record-locked (nobody else accesses the DBF file).

The best time is about 20 seconds, more or less.

Should I modify any parameters so the records update more quickly?
Re: update DBF takes different time
HXTT Support
2012-07-26 00:04:33.0
No idea now. Please consider renewing your license to upgrade to the latest version, since your version dates from 2007.
Re: update DBF takes different time
hws
2012-08-08 00:19:51.0
I deleted some records from UPIOMAS.DBF, bringing it to about 130,000 records (180,000 records before).

Under that condition, it works normally.
In my opinion, the RECCOUNT (record count of the DBF file) is causing the problem.

But I can't reduce the records from 180,000 to 130,000 for the users.

Any suggestions???


Re: update DBF takes different time
HXTT Support
2012-08-08 02:03:28.0
>But I can't reduce the records from 180,000 to 130,000 for the users.
You can try "pack table UPIOMAS;" after you back up your data.
Re: update DBF takes different time
hws
2012-08-08 02:11:36.0
Yes.
I packed the DBF file before,
so none of the records are flagged as deleted.
Re: update DBF takes different time
HXTT Support
2012-08-08 02:22:25.0
>In my opinion, the RECCOUNT (record count of the DBF file) is causing the problem.
>But I can't reduce the records from 180,000 to 130,000 for the users.
maxCacheSize: indicates the max memory utilization per table for the automatic temporary index or matched result cache. You can use 16~65536 kilobytes. Default: 1024.
You can try a bigger maxCacheSize value.
Re: update DBF takes different time
hws
2012-08-08 03:28:23.0
I have already set the following properties:
properties.setProperty("versionNumber","04");
properties.setProperty("locktype","CLIPPER");
properties.setProperty("blockSize","128");
properties.setProperty("tmpdir","_memory_");
properties.setProperty("maxCacheSize","65536");
con=DriverManager.getConnection(dbURL,properties);
They did NOT help.
Re: update DBF takes different time
HXTT Support
2012-08-08 03:43:03.0
>properties.setProperty("locktype","CLIPPER");
locktype will be ignored. You should use the lockType=CLIPPER connection property, and only if you are sure a Clipper application is using the same UPIOMAS.dbf file.

Because your version is too old, you should also consider downloading the latest evaluation version to see whether the issue disappears in your test environment.
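To illustrate the case-sensitivity point above, a minimal sketch (not production code): "locktype" in lower case is silently ignored, while "lockType" is the property name the driver reads, per the reply:

```java
import java.util.Properties;

public class LockTypeSketch {
    // The property key is case-sensitive: "lockType", not "locktype".
    static Properties clipperLocking() {
        Properties props = new Properties();
        // Only set this when a Clipper application shares the same UPIOMAS.dbf.
        props.setProperty("lockType", "CLIPPER");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(clipperLocking().getProperty("lockType")); // prints CLIPPER
    }
}
```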
Re: update DBF takes different time
hws
2012-08-09 00:36:04.0
lockType=CLIPPER??
I got the error below; that's why I changed it to properties.setProperty("locktype","CLIPPER");
Another issue: DBF_JDBC40.jar can't be used with JDK 1.2 (for WebSphere 3.5.7).

ERROR MESSAGE :
java.lang.reflect.InvocationTargetException: java.lang.NoClassDefFoundError: com/hxtt/concurrent/JNIFile
at com.hxtt.concurrent.ad.(Unknown Source)
at com.hxtt.concurrent.t.do(Unknown Source)
at com.hxtt.concurrent.t.if(Unknown Source)
at com.hxtt.concurrent.t.a(Unknown Source)
at com.hxtt.sql.dbf.i.long(Unknown Source)
at com.hxtt.sql.dbf.i.goto(Unknown Source)
at com.hxtt.sql.dbf.d.a(Unknown Source)
at com.hxtt.sql.dbf.d.(Unknown Source)
at com.hxtt.sql.dbf.u.a(Unknown Source)
at com.hxtt.sql.bm.if(Unknown Source)
at com.hxtt.sql.de.a(Unknown Source)
at com.hxtt.sql.de.a(Unknown Source)
at com.hxtt.sql.bm.a(Unknown Source)
at com.hxtt.sql.bm.a(Unknown Source)
at com.hxtt.sql.ag.a(Unknown Source)
at com.hxtt.sql.dg.executeQuery(Unknown Source)
at com.sanshing.uo.bean.uoaurounb.insertData(Unkno
Re: update DBF takes different time
HXTT Support
2012-08-09 00:42:58.0
>Another issue: DBF_JDBC40.jar can't be used with JDK 1.2 (for WebSphere 3.5.7).
You can download DBF_JDBC20.jar .
Re: update DBF takes different time
hws
2012-08-09 01:05:17.0
I got DBF_JDBC30.jar to run the ERP system.
Re: update DBF takes different time
HXTT Support
2012-08-09 01:14:16.0
You can use create table testdata select * from UPIOMAS where false; to get a blank table file, then email it to us.
Re: update DBF takes different time
hws
2012-08-09 01:42:45.0
I mailed all of the UPIOMAS files to support@hxtt.com.
It contains 3 files: UPIOMAS.DBF, UPIOMAS.NTX, and UPIOMAS.CGP.
UPIOMAS.DBF contains 234,638 records, and the file size is about 44.5MB.
Re: update DBF takes different time
HXTT Support
2012-08-10 00:35:13.0
We haven't received it yet. Maybe it's too big and has been blocked?
Re: update DBF takes different time
hws
2012-08-10 00:45:15.0
I mailed the file again, compressed to UPIOMAS.rar.
It is about 5MB.
Maybe the original file size hit a limit and was blocked.
Re: update DBF takes different time
HXTT Support
2012-08-10 03:14:09.0
Tested on a local disk.
First sql:select place,DATE_IN,TIME_IN,DRUMNO,FLOW_SEQ,EMPL_IN from UPIOMAS where DRUMNO='173146' and FLOW_SEQ='07';
First use driver spent time/row count:390ms/1
Second sql:select place,DATE_IN,TIME_IN,DRUMNO,FLOW_SEQ,EMPL_IN from UPIOMAS where DRUMNO='151142' and FLOW_SEQ='12';
Second use driver spent time/row count:0ms/1
Third sql:select place,DATE_IN,TIME_IN,DRUMNO,FLOW_SEQ,EMPL_IN from UPIOMAS where DRUMNO='150665' and FLOW_SEQ='04';
Third use driver spent time/row count:0ms/1


package test.jdbc.dbf;

import java.sql.*;
import java.util.Properties;

public class testDBFSpeed46 {
    public static void main(String argv[]) {
        try {
            Class.forName("com.hxtt.sql.dbf.DBFDriver");

            String url = "jdbc:dbf:/d:/test/dbf";

            Properties properties = new Properties();
            // properties.put("delayedClose", 0 + "");

            long currentTime = System.currentTimeMillis();
            Connection con = DriverManager.getConnection(url, properties);
            Statement stmt = con.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
            String sql = "select place,DATE_IN,TIME_IN,DRUMNO,FLOW_SEQ,EMPL_IN from UPIOMAS where DRUMNO='173146' and FLOW_SEQ='07';";
            int rowCount = 0;
            ResultSet rs = stmt.executeQuery(sql);
            while (rs.next()) {
                rowCount++;
            }
            rs.close();
            stmt.close();
            con.close();
            currentTime = System.currentTimeMillis() - currentTime;
            System.out.println("First sql:" + sql);
            System.out.println("First use driver spent time/row count:" + currentTime + "ms/" + rowCount);

            currentTime = System.currentTimeMillis();
            con = DriverManager.getConnection(url, properties);
            sql = "select place,DATE_IN,TIME_IN,DRUMNO,FLOW_SEQ,EMPL_IN from UPIOMAS where DRUMNO='151142' and FLOW_SEQ='12';";
            stmt = con.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
            rowCount = 0;
            rs = stmt.executeQuery(sql);
            while (rs.next()) {
                rowCount++;
            }
            rs.close();
            stmt.close();
            con.close();
            currentTime = System.currentTimeMillis() - currentTime;
            System.out.println("Second sql:" + sql);
            System.out.println("Second use driver spent time/row count:" + currentTime + "ms/" + rowCount);

            currentTime = System.currentTimeMillis();
            con = DriverManager.getConnection(url, properties);
            sql = "select place,DATE_IN,TIME_IN,DRUMNO,FLOW_SEQ,EMPL_IN from UPIOMAS where DRUMNO='150665' and FLOW_SEQ='04';";
            stmt = con.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
            rowCount = 0;
            rs = stmt.executeQuery(sql);
            while (rs.next()) {
                rowCount++;
            }
            rs.close();
            stmt.close();
            con.close();
            currentTime = System.currentTimeMillis() - currentTime;
            System.out.println("Third sql:" + sql);
            System.out.println("Third use driver spent time/row count:" + currentTime + "ms/" + rowCount);
        }
        catch (SQLException sqle) {
            do {
                System.out.println(sqle.getMessage());
                System.out.println("Error Code:" + sqle.getErrorCode());
                System.out.println("SQL State:" + sqle.getSQLState());
                sqle.printStackTrace();
            } while ((sqle = sqle.getNextException()) != null);
        }
        catch (Exception e) {
            System.out.println(e.getMessage());
            e.printStackTrace();
        }
    }
}

Re: update DBF takes different time
hws
2012-08-12 17:22:31.0
READ/SELECT performance is good in my users' production environment.
The problem may be with UPDATEs of 10 records or more.
For example, as above:
con.setAutoCommit(false);
//Try to update more than 10 data records for different DRUMNOs for testing
UPDATE UPIOMAS SET place='E1E00' , DATE_IN='20120725' , TIME_IN='084058' , EMPL_IN='80132' , NUTNO='Q0N0M1400CG137 T03A' , LOTNO='127008572' , STATUS='C' , DATE_OP='20120725' , TIME_OP='084058' , EMPL_OP='80132' where DRUMNO='173146' and FLOW_SEQ='07'
con.commit();

Sometimes it delays on the first record;
in another run it delays on the third record.
And it delays on only one record; the other records update with fine performance.

Re: update DBF takes different time
hws
2012-08-12 17:49:12.0
I mailed some of the logs to support@hxtt.com.
I update records in both DB2 and DBF.
Line 13 updates the first record; it took 30 seconds, and that was the best time.
The other 4 records (lines 29, 46, 63, 80) each took less than 1 second.

Some of the source code follows; de318.logs writes the log file.
The SQL is the UPDATE above.

de318.logs("Normal","[Program]:uoautupob[Method]:insertData[Status]:9. Updating drum-number flow file: "+sql);
int r = stmt_upmdat.executeUpdate(sql);
l_find = (r>0) ? true : false;
de318.logs("Normal","[Program]:uoautupob[Method]:insertData[Status]:9. Drum-number flow file update result: "+l_find); //2011/10/13 HWS: users report this runs very slowly; this SQL's DBF update may be taking too much time

Re: update DBF takes different time
HXTT Support
2012-08-12 18:42:07.0
Tested with the latest version. You can try it with your older version too.
sql:SELECT FLOW_NO,FLOW_SEQ,DATE_OP,PLACE,DATE_IN,TIME_IN,EMPL_IN FROM UPIOMAS WHERE drumno = '18B011' ORDER BY flow_seq ASC;
spent time/row count:1985ms/12
sql:SELECT FLOW_NO,FLOW_SEQ,DATE_OP,PLACE,DATE_IN,TIME_IN,EMPL_IN FROM UPIOMAS WHERE drumno = '18B015' ORDER BY flow_seq ASC;
spent time/row count:0ms/0
sql:SELECT FLOW_NO,FLOW_SEQ,DATE_OP,PLACE,DATE_IN,TIME_IN,EMPL_IN FROM UPIOMAS WHERE drumno = '18B012' ORDER BY flow_seq ASC;
spent time/row count:15ms/12
sql:SELECT FLOW_NO,FLOW_SEQ,DATE_OP,PLACE,DATE_IN,TIME_IN,EMPL_IN FROM UPIOMAS WHERE drumno = '18B013' ORDER BY flow_seq ASC;
spent time/row count:0ms/0
sql:SELECT FLOW_NO,FLOW_SEQ,DATE_OP,PLACE,DATE_IN,TIME_IN,EMPL_IN FROM UPIOMAS WHERE drumno = '18B014' ORDER BY flow_seq ASC;
spent time/row count:0ms/0
Commit spent time:2219ms


package test.jdbc.dbf;

import java.sql.*;
import java.util.Properties;

public class testDBFSpeed52 {
    public static void main(String argv[]) {
        try {
            Class.forName("com.hxtt.sql.dbf.DBFDriver");

            String url = "jdbc:dbf:/d:/test/dbf";

            Properties properties = new Properties();
            // properties.put("delayedClose", 0 + "");

            Connection con = DriverManager.getConnection(url, properties);

            con.setAutoCommit(false);

            test(con, "SELECT FLOW_NO,FLOW_SEQ,DATE_OP,PLACE,DATE_IN,TIME_IN,EMPL_IN FROM UPIOMAS WHERE drumno = '18B011' ORDER BY flow_seq ASC;",
                 "UPDATE UPIOMAS SET place='E1E00' , DATE_IN='20120810' , TIME_IN='135900' , EMPL_IN='80132' , NUTNO='Q3PMM120100H02 T00A' , LOTNO='T28002301' , STATUS='C' , DATE_OP='20120810' , TIME_OP='143556' , EMPL_OP='80132' where DRUMNO='18B011' and FLOW_SEQ='04';");

            test(con, "SELECT FLOW_NO,FLOW_SEQ,DATE_OP,PLACE,DATE_IN,TIME_IN,EMPL_IN FROM UPIOMAS WHERE drumno = '18B015' ORDER BY flow_seq ASC;",
                 "UPDATE UPIOMAS SET place='E1E00' , DATE_IN='20120810' , TIME_IN='135900' , EMPL_IN='80132' , NUTNO='Q3PMM120100H02 T00A' , LOTNO='T28002301' , STATUS='C' , DATE_OP='20120810' , TIME_OP='143556' , EMPL_OP='80132' where DRUMNO='18B015' and FLOW_SEQ='04';");

            test(con, "SELECT FLOW_NO,FLOW_SEQ,DATE_OP,PLACE,DATE_IN,TIME_IN,EMPL_IN FROM UPIOMAS WHERE drumno = '18B012' ORDER BY flow_seq ASC;",
                 "UPDATE UPIOMAS SET place='E1E00' , DATE_IN='20120810' , TIME_IN='135900' , EMPL_IN='80132' , NUTNO='Q3PMM120100H02 T00A' , LOTNO='T28002301' , STATUS='C' , DATE_OP='20120810' , TIME_OP='143556' , EMPL_OP='80132' where DRUMNO='18B012' and FLOW_SEQ='04';");

            test(con, "SELECT FLOW_NO,FLOW_SEQ,DATE_OP,PLACE,DATE_IN,TIME_IN,EMPL_IN FROM UPIOMAS WHERE drumno = '18B013' ORDER BY flow_seq ASC;",
                 "UPDATE UPIOMAS SET place='E1E00' , DATE_IN='20120810' , TIME_IN='135900' , EMPL_IN='80132' , NUTNO='Q3PMM120100H02 T00A' , LOTNO='T28002301' , STATUS='C' , DATE_OP='20120810' , TIME_OP='143556' , EMPL_OP='80132' where DRUMNO='18B013' and FLOW_SEQ='04';");

            test(con, "SELECT FLOW_NO,FLOW_SEQ,DATE_OP,PLACE,DATE_IN,TIME_IN,EMPL_IN FROM UPIOMAS WHERE drumno = '18B014' ORDER BY flow_seq ASC;",
                 "UPDATE UPIOMAS SET place='E1E00' , DATE_IN='20120810' , TIME_IN='135900' , EMPL_IN='80132' , NUTNO='Q3PMM120100H02 T00A' , LOTNO='T28002301' , STATUS='C' , DATE_OP='20120810' , TIME_OP='143556' , EMPL_OP='80132' where DRUMNO='18B014' and FLOW_SEQ='04';");

            // Start the clock just before commit so the commit itself is timed.
            long currentTime = System.currentTimeMillis();
            con.commit();
            currentTime = System.currentTimeMillis() - currentTime;
            System.out.println("Commit spent time:" + currentTime + "ms");
            con.close();
        }
        catch (SQLException sqle) {
            do {
                System.out.println(sqle.getMessage());
                System.out.println("Error Code:" + sqle.getErrorCode());
                System.out.println("SQL State:" + sqle.getSQLState());
                sqle.printStackTrace();
            } while ((sqle = sqle.getNextException()) != null);
        }
        catch (Exception e) {
            System.out.println(e.getMessage());
            e.printStackTrace();
        }
    }

    private static void test(Connection con,
                             String selectSQL, String updateSQL) throws SQLException {
        long currentTime = System.currentTimeMillis();
        Statement stmt = con.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
        int rowCount = 0;
        ResultSet rs = stmt.executeQuery(selectSQL);
        while (rs.next()) {
            rowCount++;
        }
        rs.close();

        if (rowCount > 0)
            stmt.executeUpdate(updateSQL);

        stmt.close();
        currentTime = System.currentTimeMillis() - currentTime;
        System.out.println("sql:" + selectSQL);
        System.out.println(" spent time/row count:" + currentTime + "ms/" + rowCount);
    }
}


Email: webmaster@hxtt.com
Copyright © 2003-2019 Heng Xing Tian Tai Lab of Xi'an City. | All Rights Reserved. | Privacy | Legal | Sitemap