It is a common observation with periodic imports (CSV files, XML-parsed data inserted into a database, and so on) that the job runs well for the first few months, but over time its efficiency drops and the number of newly added records gets smaller and smaller. This is mostly seen where duplicate records are filtered out rather than inserted. There are two likely reasons:
i) Initially the table holds few records, so most incoming records are unique; as more records are inserted on each periodic run, the proportion of genuinely new records falls. Duplicates are therefore one reason fewer records get inserted over time.
ii) Before inserting each new item, the importer checks the corresponding table(s) for that item's existence. With few records in the database this cross-check is cheap, but without an index its cost grows linearly with the number of records already stored, so the total work per run grows roughly quadratically. Eventually the job hits its timeout while new imports are still scanning the existing records, which further reduces the number of new items imported.
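The per-row check-then-insert pattern described in point ii) can be sketched as follows. This is a minimal illustration, not the original importer's code; the in-memory SQLite database, the `items` table, and the `sku` column are all hypothetical stand-ins:

```python
import sqlite3

# Hypothetical schema for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (sku TEXT, name TEXT)")

def import_rows(rows):
    """Insert only rows whose sku is not already present (per-row check)."""
    inserted = 0
    for sku, name in rows:
        # Existence check: without an index on sku, this scan gets slower
        # as the table grows, which is the slowdown described above.
        exists = conn.execute(
            "SELECT 1 FROM items WHERE sku = ? LIMIT 1", (sku,)
        ).fetchone()
        if exists is None:
            conn.execute(
                "INSERT INTO items (sku, name) VALUES (?, ?)", (sku, name)
            )
            inserted += 1
    conn.commit()
    return inserted

# First run inserts both rows; the second finds one duplicate.
print(import_rows([("A1", "widget"), ("B2", "gadget")]))  # 2
print(import_rows([("A1", "widget"), ("C3", "gizmo")]))   # 1
```

Note that an index (or a unique constraint) on the checked column keeps each lookup fast regardless of table size; the sketch above deliberately omits it to show the slow pattern.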
In short, the most common bottleneck in cron jobs and periodic importers is the cross-check every new item must pass before insertion.
It is suggested to delete old records so that the table stays small enough for this cross-check to remain cheap when inserting every new record.
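The pruning step suggested above could look like the following minimal sketch. The 90-day retention window, the `items` table, and the `imported_at` timestamp column are assumptions chosen for illustration:

```python
import sqlite3
import time

# Hypothetical schema: each imported row carries an import timestamp.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (sku TEXT, imported_at REAL)")

RETENTION_SECONDS = 90 * 24 * 3600  # assumed 90-day retention window

def prune_old_records(now=None):
    """Delete records older than the retention window; return rows removed."""
    now = time.time() if now is None else now
    cur = conn.execute(
        "DELETE FROM items WHERE imported_at < ?", (now - RETENTION_SECONDS,)
    )
    conn.commit()
    return cur.rowcount

# Seed one stale and one fresh record, then prune before the next import run.
now = time.time()
conn.execute("INSERT INTO items VALUES ('OLD1', ?)", (now - 200 * 24 * 3600,))
conn.execute("INSERT INTO items VALUES ('NEW1', ?)", (now,))
print(prune_old_records(now))  # 1 (only the 200-day-old record is removed)
```

Running a prune like this at the start of each periodic job keeps the existence check bounded, at the cost of possibly re-importing an item that was previously seen but has since been pruned.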