What remains constant during the recovery of data in Oracle Database 12c?

The CDB remains open throughout, even during incomplete recovery. This is a fascinating one to watch: the way it does incomplete recovery is based on the auxiliary database technique. If you do exercise 10.3 and look at the scripts RMAN generates, they're really instructive to study; it's really clever to watch what it does as it runs, assuming it succeeds. And finally flashback, yes, as an experiment, not to the end; then shut it down.

So, it's now ten minutes to the hour. Thank you for hanging on with me; we've done four chapters, and that's brilliant.

I have a question. It's regarding RMAN, and with a single-tenant database: would you expect people generally to do the RMAN backup of the root container database, or would most people take a backup of just that pluggable?

If single-tenant, I would definitely do it from the root. You want your backup to be the complete database.

I guess notionally maybe you could save space? No, you'd do the whole thing; you'd do a complete backup and recovery, and there would be no point in doing it any other way. It will be different, of course, in true multi-tenant: you might then have different service level agreements. Perhaps for your test and dev databases you just do a backup once a week; production, of course, you'll be backing up twice a day. So you might want different standards.

Is there a way to do a backup of the container database but ignore some of the pluggables? That can make sense. Yes, there is; the syntax lets you, though we don't actually have an example of it here. You can do BACKUP PLUGGABLE DATABASE and nominate which one, and I think that takes a comma-separated list. And when you think about it, that's all BACKUP PLUGGABLE DATABASE is doing.
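As a hedged sketch of that syntax (the PDB names here are invented for illustration), backing up selected pluggable databases from the root might look like this:

```sql
-- Run from an RMAN session connected to the root container.
-- PDB names (pdb1, pdb2) are illustrative.
BACKUP PLUGGABLE DATABASE pdb1, pdb2;

-- Or back up the complete CDB, root and all pluggables:
BACKUP DATABASE;
```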

So it's just a shortcut to a list of tablespaces, which, from the root, are all visible. And I find with all of this that if you get the architecture straight, you can normally work questions like that out, can't you? I think of the PDBs as just sets of tablespaces, and once I thought of it that way, that was when it began to fall into place for me. And RMAN always works on files, so, exactly, it knows nothing about the data dictionary. That's a good point: it knows nothing about the complexities of what's happening in the dictionary, all the clever stuff; it just knows about files. That's a good way of putting it.

OK, it's a bit too late to do much of the exercise now. All right, let's see: cloning. This takes us on to an interesting question that I was waiting for one of you to ask me, which is: what type of database should you consider running in this environment?

Now, I've been happily saying consolidate 20 databases into one CDB. Think about what those databases are doing. If you've got a database where the bottleneck for performance is redo generation, would you want to plug it into a CDB? No, exactly, because you've only got one instance and one database: in that particular example, one log writer and one set of redo logs. So if you have a set of databases that are really stressed on redo generation and you consolidate them into one, the results will be disastrous. So when do you use this? The size isn't so important, which is what you asked initially; the size of the thing really shouldn't matter. It's not going to take long to clone a container, you're just copying data files, so the size isn't significant. But the activity within that database may well be significant. If you've got a database that requires the full resources of its instance, it's going to be a really bad move to consolidate it with several others.
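One hedged way to judge whether a candidate database is redo-bound before consolidating it is to look at its cumulative redo statistics; this is a generic sketch, not something from the class scripts:

```sql
-- Cumulative redo generated since instance startup, in bytes.
-- Compare two snapshots over an interval to estimate the redo rate.
SELECT name, value
FROM   v$sysstat
WHERE  name = 'redo size';
```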

You'd have to put it into its own CDB as a single tenant, and then you're not going to have any problem. To clone a 150-gigabyte database? Nothing; it's going to be really fast, depending on your hardware. A twenty-terabyte database is going to take a lot longer. And twenty-terabyte databases, well, think about what they are: probably a data warehouse. If it's a data warehouse, you won't have a problem with redo, but you do need to think about competing for parallel execution servers, for example. As discussed ten minutes ago, you configure the pool of parallel execution servers in the root, but then what are you going to do with the individual containers? So you've got to think about what type of database is a valid candidate, and typically it's databases that don't require the full resources of the instance they're running in.
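As a hedged illustration of sizing that shared pool in the root (the values are invented), the relevant instance parameters might be set like this:

```sql
-- Set in the root: one pool of parallel execution servers
-- is shared by every container. Values are illustrative.
ALTER SYSTEM SET parallel_max_servers = 64 SCOPE = BOTH;
ALTER SYSTEM SET parallel_servers_target = 32 SCOPE = BOTH;
```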

So you can safely consolidate, because there's only one database and only one instance. But you mentioned going ahead and making them pluggable, even as a single tenant in that case, because the non-CDB architecture is going to be deprecated otherwise? Exactly, and you still get the benefit of the fast patching, of course. You get a load of extra complication if you choose to do it by creating common users; as I said, I'd be inclined to avoid that. But what I really love about it is that it really should be completely transparent to the users and to the programmers. They shouldn't know you've done this; they really shouldn't. I think tuning is going to be a bit of an issue. You can generate things like Statspack and AWR reports per container, so you can see how they're comparing before and after consolidation, and you can also generate AWR reports from the root covering the whole CDB. So you can tune them to a large extent independently.
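A hedged sketch of generating such a report: the awrrpt.sql script lives under the standard admin directory, though exactly what per-container scoping is available will depend on your release and licensing.

```sql
-- From a SQL*Plus session connected to the root, generate an
-- AWR report interactively (prompts for snapshot range and format).
@?/rdbms/admin/awrrpt.sql
```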

Now, if it's a big, busy database, you're going to be thinking long and hard about whether to do this. What we will see on Thursday is how important the Resource Manager is. I'll just mention briefly now that it is possible to restrict the impact of activity in one container on another container; the Resource Manager can do that, and that is important.

OK, that takes us to the end of the day, but we haven't done the exercise. If the textbook has reached you, chapter 3 gives, in some ways, a nice description of multi-tenant. So if your textbook has arrived, read chapter 3 and come back to me tomorrow with what you think about it, because the author of the textbook describes things rather differently from me, and there are a couple of things I don't agree with. If you've got the book, read chapter 3 and we'll talk about it tomorrow; if you haven't, well, you'll have to wait for it.
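As a hedged sketch of what that Resource Manager control looks like (the plan and PDB names are invented), a CDB resource plan allocates shares and limits per container through the DBMS_RESOURCE_MANAGER package:

```sql
-- Illustrative CDB resource plan: PDBA gets more shares plus a
-- utilization cap, limiting its impact on the other containers.
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN(
    plan    => 'consol_plan',
    comment => 'Example plan for a consolidated CDB');
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan               => 'consol_plan',
    pluggable_database => 'pdba',
    shares             => 3,
    utilization_limit  => 50);
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
-- Then activate it in the root:
-- ALTER SYSTEM SET resource_manager_plan = 'consol_plan';
```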

Let's do the exercise first thing tomorrow morning, and if you want to do it now, by all means do. What I'll aim to do is start the next chapter at half past the hour, or a bit later if you need more time to complete the exercise. It's been a good day, and thanks for hanging on; we've done four chapters, which is pretty damn good. I'll start the class session at nine o'clock tomorrow morning, nine o'clock Eastern time, and if you have already done the exercise, great; otherwise, let's aim to start the next chapter at half past nine, quarter to ten-ish, something like that.

So, starting the recording. To do the recovery of one container, we have to restore the data dictionary of the root as it was at that time, and we have to restore the undo tablespace as it was at that time, because at that time there might have been incomplete transactions.
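A hedged sketch of the RMAN command that kicks this whole process off (the PDB name, SCN, and path are invented for illustration):

```sql
-- Point-in-time recovery of one PDB while the CDB stays open.
-- RMAN builds the auxiliary instance in the nominated directory.
RUN {
  SET UNTIL SCN 1766305;
  RESTORE PLUGGABLE DATABASE pdba;
  RECOVER PLUGGABLE DATABASE pdba
    AUXILIARY DESTINATION '/u01/app/oracle/aux';
}
```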

So what it's doing here is working out which tablespaces might have undo segments. It's going to take the data dictionary of the root container and the undo tablespace back in time, and it has to restore them to a separate directory because, of course, it can't overwrite the live tablespaces: my database is open and my other 19 containers are doing work. So it restores those two to that temporary space. Then it builds an instance in memory. On Thursday we're going to see this happening again when we use another, more advanced, restore procedure. The RMAN process is building an auxiliary instance, given a name it just picked at random, and it's building it with these parameters. The parameters are the defaults; if you don't like them, you can change them by creating what we call an auxiliary parameter file, which again we'll see on Thursday, but in principle the defaults are probably going to work. You might want to reduce the memory, for instance. And notice it sets the Oracle Managed Files destination to the directory nominated.
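A hedged sketch of what such an auxiliary parameter file might contain; every value here is illustrative, not taken from the class scripts:

```
# Illustrative auxiliary parameter file for the RMAN-built instance.
# db_name must match the target CDB; memory is kept deliberately small.
db_name=CDB1
compatible=12.1.0.2.0
sga_target=1G
processes=200
db_create_file_dest='/u01/app/oracle/aux'
```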

So it's just building an instance in memory; at that point I just grepped for the pmon processes, and there it is, an absolutely normal instance. Having done that, it can now do a bit more work: it's going to restore the control file as of that system change number, and then it can mount it. So it has restored a clone of the control file and mounted it on the auxiliary instance. Having done that, it can then do more work on the data files, the auxiliary set of data files; it now knows, having got the control file, what it really needs. It has to do things in stages because RMAN can't understand the data dictionary; it only knows about files. So it has to mount the clone database, switch over to the restored files it needs, and the auxiliary set is going to be the undo tablespace and the root container's SYSTEM tablespace. And then it's going to do the recovery, which it finally does here: it's restoring all of the root container, SYSTEM, undo, SYSAUX, USERS.
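The generated script drives the clone through these stages; a hedged sketch of the shape of those commands (the SCN is invented, and the real script is produced automatically by RMAN):

```sql
-- Shape of the commands RMAN generates for the auxiliary instance
-- (illustrative; study the actual generated script in the exercise).
run {
  set until scn 1766305;
  restore clone controlfile;
  sql clone 'alter database mount clone database';
}
```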

I don't know why it restores USERS; I can't think why it needs USERS, and I can't think why it needs SYSAUX. Technically, I think it probably only needs SYSTEM and the undo, but for whatever reason it's pulling back the entire root container to that temporary location. And then it can do the recovery: it can recover the data files of PDBA to the point in time, and it's extracting the archive logs and doing the business. At the end, it kills itself and deletes the data files. It's a really amazingly clever thing. Having done that, you just open the pluggable container, and you have to open it with RESETLOGS. It doesn't reset the logs, of course it doesn't, but it's a syntactic requirement, so that's what goes through. It's a very clever mechanism. And then I went back to my container, logged on again, and there's my table with 37 rows. An amazingly clever mechanism.
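A hedged sketch of that final step (the PDB name and table are illustrative):

```sql
-- After the PITR completes, the PDB must be opened with RESETLOGS.
ALTER PLUGGABLE DATABASE pdba OPEN RESETLOGS;

-- Then reconnect to the container and check the recovered data:
-- ALTER SESSION SET CONTAINER = pdba;
-- SELECT COUNT(*) FROM my_table;
```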
