Wednesday, March 7, 2012

reporting/archiving database?

Hi all,
We currently have a production database fast approaching 15GB. It
stores all types of data, including transactional records, image columns for
scanned documents, etc. In addition to our production applications using
this database, all our reports have been written to run off this database.
Thus far, users of the production system have had the luxury of pulling up
transactions since inception, and they like having that flexibility. We have
about 3 years' worth of data currently in the database. For the past several
months we have been contemplating whether we should set up a separate
reporting/archive database for all reports to run off. We would archive
all records older than 1 year to this database. We would then have to change
all reports to point to this database and also update our production app
with some custom logic to search this second database if the record was not
found in the primary database.
Is this the correct, best-practices way to proceed? If yes, how would we go
about setting this up? Replication? I know that with replication it will simply
replicate all the records over to the secondary database. So, if I then
delete something from the primary, will it also delete from the secondary?
We don't want that, right?
Any insight, help, tips, or guidelines would be much appreciated.
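The plan described here — move rows older than a year into a second database and have the app fall back to it when a record is not found — can be sketched in miniature. SQLite in Python stands in for the two SQL Server databases purely so the sketch is self-contained; the table, column names, and cutoff date are all hypothetical:

```python
import sqlite3

# Two in-memory databases stand in for the production and archive servers.
# The "orders" table and its columns are made up for illustration.
prod = sqlite3.connect(":memory:")
archive = sqlite3.connect(":memory:")
for db in (prod, archive):
    db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, order_date TEXT, amount REAL)")

prod.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "2004-05-01", 100.0),   # older than one year -> should be archived
     (2, "2006-04-01", 250.0)],  # recent -> stays in production
)

cutoff = "2005-06-01"  # pretend this is "one year ago"

# Step 1: copy rows older than the cutoff into the archive database ...
old_rows = prod.execute(
    "SELECT id, order_date, amount FROM orders WHERE order_date < ?", (cutoff,)
).fetchall()
archive.executemany("INSERT INTO orders VALUES (?, ?, ?)", old_rows)
archive.commit()

# Step 2: ... and only then delete them from production.
prod.execute("DELETE FROM orders WHERE order_date < ?", (cutoff,))
prod.commit()

def find_order(order_id):
    """The app's custom logic: check production first, then the archive."""
    row = prod.execute("SELECT * FROM orders WHERE id = ?", (order_id,)).fetchone()
    if row is None:
        row = archive.execute("SELECT * FROM orders WHERE id = ?", (order_id,)).fetchone()
    return row

print(find_order(1))  # found via the archive fallback
print(find_order(2))  # still in production
```

On SQL Server the copy-then-delete step would be a scheduled T-SQL or DTS job between two real servers, but the order of operations (copy, verify, then delete) is the part worth keeping.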
TIA!

15GB is small. Some would even call it tiny.
How does your system perform today?
You should be able to scale to a much larger database than that and still
have good performance (assuming that you are running on decent hardware, and
have an efficient database design).
Keith Kratochvil

The performance is getting slower day by day. Long reports are timing out.
My guess is a lack of good indexing. However, for a pure OLTP environment, I have
heard some people say that having too many indexes is not good either. Any
recommendations?

People say that too many indexes are bad for OLTP because (this is my
interpretation) every time you make modifications to a table, the
indexes associated with that table have to be updated. This can be a
performance hit if there are a lot of indexes and if many modifications
are made (as in an OLTP environment). Plenty of indexes is fine for a
data-warehouse-style database because your standard user only queries
that database.
Archiving data records isn't a bad idea, it's something I've considered
doing when the DBs I handle get too large. Sometimes it's not feasible
to make changes to a production database (e.g. add/remove indexes) but
if it is an option available to you then explore that too. You can use
the Index Tuning Wizard and SQL Profiler traces as a starting point (do
a Profiler trace for a day and then load it into the Wizard to see what
its recommendation is).
Have you considered doing a hardware upgrade (scaling up)? It's more of
a band-aid solution and not always feasible (money etc).
But, if you want to setup a reporting server, here are a few things
I've learnt in the past that might (or might not) be helpful:
What you said in your first post about replication is pretty much true.
I suspect you'd want to use transactional or snapshot replication and
not merge since you'd have a reporting (basically read-only) and
production SQL Server. If you use replication then changes on the
production SQL Server will also be made on the reporting server. Same
with log shipping (if you use it).
Something you could do is set up a separate server with a database that
has the exact same schema as your production database. You can then
define DTS (SQL Server 2000) or Integration Services (SQL Server 2005)
packages to transfer data from the production to the new/reporting
server and then remove the data from the production server.
Schedule the execution of the packages for a time of low load
(midnight, late at night or very early in the morning are generally
good choices but it depends on your situation) so that users don't
experience a performance hit.

Too many indexes can hinder performance, but that is not a reason to avoid
them. You need to balance the performance of your INSERT/UPDATE/DELETE
statements against the performance of your SELECT statements.
Have you updated statistics?
Have you tried running Profiler to see which queries/stored procedures take
the most amount of time? Perhaps you could create indexes and tune up 10%
of your queries and achieve a significant improvement in performance.
Keith Kratochvil
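The indexing advice in this thread is easy to demonstrate in miniature: put an index on the column your slow queries filter on, and the planner switches from a full table scan to an index seek. SQLite in Python is used here only because it needs no server; the table and index names are made up:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL)")
db.executemany("INSERT INTO orders VALUES (?, ?, ?)",
               [(i, i % 100, float(i)) for i in range(1000)])

query = "SELECT amount FROM orders WHERE customer_id = ?"

# Without an index on customer_id, the planner must scan the whole table.
before = db.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall()
print(before)  # the plan detail mentions a SCAN

# After indexing the filtered column, the same query can seek instead.
db.execute("CREATE INDEX ix_orders_customer ON orders(customer_id)")
after = db.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall()
print(after)   # the plan detail mentions USING INDEX
```

On SQL Server the equivalent check is the estimated execution plan (or Profiler plus the Index Tuning Wizard, as suggested above); the trade-off is the same, since every extra index also adds write cost to INSERT/UPDATE/DELETE.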
