mevdschee / php-crud-api

Single file PHP script that adds a REST API to a SQL database

Can the API Support Customized Cache Management for Specific Tables?

Ghenimi opened this issue

Hello and thank you for developing this amazing API and dedicating your efforts to it.

I'm curious whether the API can handle caching for specific tables. For instance, could it be configured to have no cache for the users table, a 24-hour cache for the content table, and a one-month cache for the site_infos table?

Thank you for your kind words.

could it be configured to have no cache for the users table, a 24-hour cache for the content table, and a one-month cache for the site_infos table?

No, this is not supported (yet). Note that the cache only caches the structure and not the data.
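
For context, the cache that does exist is configured globally and only speeds up the reflection of the database structure. As a rough sketch (key names as documented in the README, values purely illustrative):

```php
$config = new Config([
    // Structure (reflection) cache only; query results are never cached.
    'cacheType' => 'TempFile', // other documented types: Redis, Memcache, Memcached, NoCache
    'cachePath' => '/tmp',     // illustrative path
    'cacheTime' => 10,         // seconds the cached structure stays valid
]);
```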

Kindest regards, Maurits

Thank you so much for your quick response! It's greatly appreciated.

Integrating support for TempFile cache for specific tables and responses would be a fantastic enhancement, potentially boosting the speed and efficiency of the apps using the API. I'm so grateful for your great work.

Integrating support for TempFile cache for specific tables and responses would be a fantastic enhancement

And how would this improve performance measurably? Is your database remote and very slow?

In scenarios where we're building a flat CMS or dealing with content-heavy applications, and experiencing high traffic volumes, caching becomes especially valuable. Articles and static content, which don't change frequently, are prime candidates for caching to optimize performance. On the other hand, tables like comments and user data need to be updated more frequently to reflect real-time interactions and changes within the system.

By implementing a flexible caching mechanism that allows us to selectively cache specific tables, we can tailor the caching strategy to suit the needs of different types of data. This approach ensures that we strike the right balance between performance optimization and data freshness.

In scenarios where we're building a flat CMS or dealing with content-heavy applications, and experiencing high traffic

As a performance engineer I'm very familiar with high-traffic applications and static sites. Static site generators should pull the data in (right?), so caching in the API is not needed. Content-heavy applications need a local database. With the current speed of disk drives, the number of HTTP requests seems more important to me than the number of queries, especially when those queries run on localhost and the HTTP requests go over the network. Hence my question about the performance problems you are measuring.

this approach ensures that we strike the right balance between performance optimization and data freshness.

Caching hurts consistency, and the acceptable trade-offs are very application-specific and tied to the poor performance (capacity under high traffic) of specific endpoints. Even if we had a generic way to implement this, I wouldn't know a generic way to configure it. I also doubt (see the point made above) that you would be caching at the right level: caching at a higher level would allow you to be more application/use-case specific.

Since the read operations are HTTP GET requests, you could write a customizationHandler that adds caching headers to the response. This could reduce the actual number of HTTP requests, but only if your (AJAX?) client respects those headers. I think this could improve real-world performance (at the cost of consistency).
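
To illustrate (this is only a sketch, not a built-in feature): using the 'customization' middleware documented in the README, an afterHandler could map the table names and lifetimes from this thread to Cache-Control headers roughly like this:

```php
$config = new Config([
    'middlewares' => 'customization', // add to whatever middleware list you already use
    'customization.afterHandler' => function ($operation, $tableName, $response, $environment) {
        // Only the read operations ('list' and 'read') are served over HTTP GET.
        if (!in_array($operation, ['list', 'read'])) {
            return $response;
        }
        // Per-table lifetime in seconds: users uncached, content 24 hours, site_infos ~30 days.
        $maxAge = ['users' => 0, 'content' => 86400, 'site_infos' => 2592000];
        $seconds = $maxAge[$tableName] ?? 0;
        if ($seconds > 0) {
            return $response->withHeader('Cache-Control', 'public, max-age=' . $seconds);
        }
        return $response->withHeader('Cache-Control', 'no-store');
    },
]);
```

Note that the API still executes the query on every request that actually reaches it; the gain only comes from the client (or an intermediate proxy) honouring the headers.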

NB: Next thing you'll be asking is implementing cache invalidation on write operations ;-) (joke, not serious)

NB2: Last comment is a reference to the 'hard things' in software engineering, see: https://martinfowler.com/bliki/TwoHardThings.html

you could write a customizationHandler that adds caching headers to the response. This could reduce the actual number of HTTP requests, but only if your (AJAX?) client respects those headers. I think this could improve real-world performance (at the cost of consistency).

Could you please provide additional clarification on this matter?

NB: Next thing you'll be asking is implementing cache invalidation on write operations ;-) (joke, not serious)

Why not xD

NB2: Last comment is a reference to the 'hard things' in software engineering, see: https://martinfowler.com/bliki/TwoHardThings.html

That's quite impressive!

Could you please provide additional clarification on this matter?

Sure, read more about HTTP caching here: https://developer.mozilla.org/en-US/docs/Web/HTTP/Caching
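
Concretely, with the lifetimes discussed above, a read on the content table would come back with a header along these lines (illustrative values, standard HTTP semantics):

```
GET /records/content/1

HTTP/1.1 200 OK
Cache-Control: public, max-age=86400
```

A browser or AJAX client that honours this header will reuse its local copy for up to 24 hours instead of hitting the API again.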

Why not xD

Because it is hard (and therefore should be avoided, if possible).

I'll re-open this issue if you actually use the software and have measurable performance problems. I can then help you to analyze and improve (real-world) performance.