

Q71. Examine the following query output: 

You issue the following command to import tables into the hr schema: 

$ impdp hr/hr directory=dumpdir dumpfile=hr_new.dmp schemas=hr TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y

Which statement is true? 

A. All database operations performed by the impdp command are logged. 

B. Only CREATE INDEX and CREATE TABLE statements generated by the import are logged. 

C. Only CREATE TABLE and ALTER TABLE statements generated by the import are logged. 

D. None of the operations against the master table used by Oracle Data Pump to coordinate its activities are logged. 

Answer:

Explanation: Oracle Data Pump disables redo logging when loading data into tables and when creating indexes. The new TRANSFORM option introduced in Data Pump import provides the flexibility to turn off redo generation for objects during the course of the import. The Master Table is used to track the detailed progress information of a Data Pump job. The Master Table is created in the schema of the current user running the Data Pump export or import, and it keeps track of a great deal of detailed information about the job.
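
For reference, the transform can also be restricted to a single object type; a sketch reusing the directory and dump file from the question:

$ impdp hr/hr directory=dumpdir dumpfile=hr_new.dmp schemas=hr TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y:TABLE

The :TABLE qualifier suppresses redo logging only for table data loads, while index creation remains logged. Note that if the database is in FORCE LOGGING mode, the DISABLE_ARCHIVE_LOGGING transform is ignored and all operations generate redo.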

Q72. Which three statements are true concerning unplugging a pluggable database (PDB)? 

A. The PDB must be open in read only mode. 

B. The PDB must be closed.

C. The unplugged PDB becomes a non-CDB. 

D. The unplugged PDB can be plugged into the same multitenant container database (CDB).

E. The unplugged PDB can be plugged into another CDB. 

F. The PDB data files are automatically removed from disk. 

Answer: B,D,E 

Explanation: B, not A: The PDB must be closed before unplugging it. 

D: An unplugged PDB contains data dictionary tables, and some of the columns in these encode information in an endianness-sensitive way. There is no supported way to handle the conversion of such columns automatically. This means, quite simply, that an unplugged PDB cannot be moved across an endianness difference. 

E (not F): To exploit the new unplug/plug paradigm for patching the Oracle version most effectively, the source and destination CDBs should share a filesystem so that the PDB’s datafiles can remain in place. 

Reference: Oracle White Paper, Oracle Multitenant 
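
A sketch of the unplug and re-plug sequence (the PDB name and XML file path are hypothetical):

-- The PDB must be closed before it can be unplugged (B):
ALTER PLUGGABLE DATABASE pdb1 CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE pdb1 UNPLUG INTO '/u01/app/oracle/pdb1.xml';

-- To re-plug into the same CDB (D), drop the unplugged PDB but keep its files:
DROP PLUGGABLE DATABASE pdb1 KEEP DATAFILES;
CREATE PLUGGABLE DATABASE pdb1 USING '/u01/app/oracle/pdb1.xml' NOCOPY;

-- The same CREATE ... USING statement plugs the PDB into another CDB (E).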

Q73. Which four actions are possible during an Online Data File Move operation? 

A. Creating and dropping tables in the data file being moved 

B. Performing file shrink of the data file being moved 

C. Querying tables in the data file being moved 

D. Performing Block Media Recovery for a data block in the data file being moved 

E. Flashing back the database 

F. Executing DML statements on objects stored in the data file being moved 

Answer: A,C,E,F 

Explanation: - You can now move an online datafile without having to stop Managed Recovery and manually copy and rename files. This can even be used to move datafiles from or to ASM. 

- New in Oracle Database 12c (from MetaLink): it is now possible to move a datafile online while Managed Recovery is running, i.e. while the physical standby database is in Active Data Guard mode (opened READ ONLY with Managed Recovery running). You can use this command to move the datafile. 

- A flashback operation does not relocate a moved data file to its previous location. If you move a data file online from one location to another and later flash back the database to a point in time before the move, the data file remains in the new location, but its contents are changed to the contents at the time specified in the flashback. (Oracle Database Administrator's Guide 12c Release 1 (12.1)) 
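
A minimal sketch of the online move command described above (file paths are hypothetical):

-- The datafile stays online; queries and DML against objects in it,
-- table creation and drop, and flashback all continue to work during the move:
ALTER DATABASE MOVE DATAFILE '/u01/oradata/orcl/users01.dbf'
  TO '/u02/oradata/orcl/users01.dbf';

-- KEEP retains a copy at the source location; a disk group target moves the file into ASM:
ALTER DATABASE MOVE DATAFILE '/u02/oradata/orcl/users01.dbf'
  TO '+DATA' KEEP;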

Q74. You executed a DROP USER CASCADE on an Oracle 11g release 1 database and immediately realized that you forgot to copy the OCA.EXAM_RESULTS table to the OCP schema. 

The RECYCLE_BIN was enabled before the DROP USER was executed, and the OCP user has been granted the FLASHBACK ANY TABLE system privilege. 

What is the quickest way to recover the contents of the OCA.EXAM_RESULTS table to the OCP schema? 

A. Execute FLASHBACK TABLE OCA.EXAM_RESULTS TO BEFORE DROP RENAME TO OCP.EXAM_RESULTS; connected as SYSTEM. 

B. Recover the table using traditional Tablespace Point In Time Recovery. 

C. Recover the table using Automated Tablespace Point In Time Recovery. 

D. Recover the table using Database Point In Time Recovery. 

E. Execute FLASHBACK TABLE OCA.EXAM_RESULTS TO BEFORE DROP RENAME TO EXAM_RESULTS; connected as the OCP user. 

Answer: C 

Explanation: RMAN tablespace point-in-time recovery (TSPITR). 

Recovery Manager (RMAN) TSPITR enables quick recovery of one or more tablespaces in a database to an earlier time without affecting the rest of the tablespaces and objects in the database. 

Fully Automated (the default) 

In this mode, RMAN manages the entire TSPITR process including the auxiliary instance. 

You specify the tablespaces of the recovery set, an auxiliary destination, the target time, and you allow RMAN to manage all other aspects of TSPITR. 

The default mode is recommended unless you specifically need more control over the location of recovery set files after TSPITR, auxiliary set files during TSPITR, channel settings and parameters or some other aspect of your auxiliary instance. 
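
A sketch of fully automated TSPITR in RMAN; the tablespace name, target time, and auxiliary destination are hypothetical:

RMAN> RECOVER TABLESPACE users
  UNTIL TIME "TO_DATE('2015-06-01 09:00:00','YYYY-MM-DD HH24:MI:SS')"
  AUXILIARY DESTINATION '/u01/aux';

RMAN creates and manages the auxiliary instance, restores the recovery and auxiliary sets, recovers them to the target time, and re-imports the tablespace metadata into the target database.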

Q75. Examine this command: 

SQL> exec DBMS_STATS.SET_TABLE_PREFS('SH', 'CUSTOMERS', 'PUBLISH', 'false'); 

Which three statements are true about the effect of this command? 

A. Statistics collection is not done for the CUSTOMERS table when schema stats are gathered. 

B. Statistics collection is not done for the CUSTOMERS table when database stats are gathered. 

C. Any existing statistics for the CUSTOMERS table are still available to the optimizer at parse time. 

D. Statistics gathered on the CUSTOMERS table when schema stats are gathered are stored as pending statistics. 

E. Statistics gathered on the CUSTOMERS table when database stats are gathered are stored as pending statistics. 

Answer: C,D,E 

Explanation: * SET_TABLE_PREFS Procedure 

This procedure is used to set the statistics preferences of the specified table in the specified schema. 

* Example: Using Pending Statistics Assume many modifications have been made to the employees table since the last time statistics were gathered. To ensure that the cost-based optimizer is still picking the best plan, statistics should be gathered once again; however, the user is concerned that new statistics will cause the optimizer to choose bad plans when the current ones are acceptable. The user can do the following: 

EXEC DBMS_STATS.SET_TABLE_PREFS('hr', 'employees', 'PUBLISH', 'false'); 

By setting the employees table's PUBLISH preference to FALSE, any statistics gathered from now on will not be automatically published. The newly gathered statistics will be marked as pending. 
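
Continuing the documentation example, a sketch of the typical pending-statistics workflow (schema and table as above):

-- With PUBLISH set to FALSE, newly gathered statistics are stored as pending:
EXEC DBMS_STATS.GATHER_TABLE_STATS('hr', 'employees');

-- Test the pending statistics in the current session only:
ALTER SESSION SET optimizer_use_pending_statistics = TRUE;

-- If the plans are acceptable, make the statistics available to all sessions:
EXEC DBMS_STATS.PUBLISH_PENDING_STATS('hr', 'employees');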

Q76. A redaction policy was added to the SAL column of the SCOTT.EMP table:

 

All users have their default set of system privileges. 

For which three situations will data not be redacted? 

A. SYS sessions, regardless of the roles that are set in the session 

B. SYSTEM sessions, regardless of the roles that are set in the session 

C. SCOTT sessions, only if the MGR role is set in the session 

D. SCOTT sessions, only if the MGR role is granted to SCOTT 

E. SCOTT sessions, because he is the owner of the table 

F. SYSTEM session, only if the MGR role is set in the session 

Answer: A,D,F 

Explanation: 

* SYS_CONTEXT: This is a twist on the SYS_CONTEXT function, as it does not use USERENV. With this usage, SYS_CONTEXT queries the list of roles currently enabled in the session and returns TRUE if the named role is enabled. 

Example: 

SYS_CONTEXT('SYS_SESSION_ROLES', 'SUPERVISOR') 

conn scott/tiger@pdborcl

SELECT sys_context('SYS_SESSION_ROLES', 'RESOURCE') FROM dual;

SYS_CONTEXT('SYS_SESSION_ROLES','RESOURCE')
-------------------------------------------
FALSE

conn sys@pdborcl as sysdba

GRANT resource TO scott;

conn scott/tiger@pdborcl

SELECT sys_context('SYS_SESSION_ROLES', 'RESOURCE') FROM dual;

SYS_CONTEXT('SYS_SESSION_ROLES','RESOURCE')
-------------------------------------------
TRUE
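
For context, a redaction policy of the kind referenced in the exhibit would typically be created along the following lines; since the exhibit is not reproduced here, the policy name and expression are hypothetical:

BEGIN
  DBMS_REDACT.ADD_POLICY(
    object_schema => 'SCOTT',
    object_name   => 'EMP',
    column_name   => 'SAL',
    policy_name   => 'redact_sal',        -- hypothetical name
    function_type => DBMS_REDACT.FULL,    -- redact to the default value
    expression    => 'SYS_CONTEXT(''SYS_SESSION_ROLES'', ''MGR'') = ''FALSE''');
END;
/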

Q77. Examine the current value for the following parameters in your database instance: 

SGA_MAX_SIZE = 1024M 

SGA_TARGET = 700M 

DB_8K_CACHE_SIZE = 124M 

LOG_BUFFER = 200M 

You issue the following command to increase the value of DB_8K_CACHE_SIZE: 

SQL> ALTER SYSTEM SET DB_8K_CACHE_SIZE=140M; 

Which statement is true? 

A. It fails because the DB_8K_CACHE_SIZE parameter cannot be changed dynamically. 

B. It succeeds only if memory is available from the autotuned components of the SGA. 

C. It fails because an increase in DB_8K_CACHE_SIZE cannot be accommodated within SGA_TARGET. 

D. It fails because an increase in DB_8K_CACHE_SIZE cannot be accommodated within SGA_MAX_SIZE. 

Answer: B 

Explanation: * The SGA_TARGET parameter can be dynamically increased up to the value specified for the SGA_MAX_SIZE parameter, and it can also be reduced. 

* Example: 

For example, suppose you have an environment with the following configuration: 

SGA_MAX_SIZE = 1024M
SGA_TARGET = 512M
DB_8K_CACHE_SIZE = 128M

In this example, the value of SGA_TARGET can be resized up to 1024M and can also be reduced until one or more of the automatically sized components reaches its minimum size. The exact value depends on environmental factors such as the number of CPUs on the system. However, the value of DB_8K_CACHE_SIZE remains fixed at all times at 128M. 

* DB_8K_CACHE_SIZE: size of the cache for 8K buffers. 

* For example, consider this configuration: 

SGA_TARGET = 512M
DB_8K_CACHE_SIZE = 128M

In this example, increasing DB_8K_CACHE_SIZE by 16M to 144M means that the 16M is taken away from the automatically sized components. Likewise, reducing DB_8K_CACHE_SIZE by 16M to 112M means that the 16M is given to the automatically sized components. 
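
To verify where the extra memory would come from, the dynamic SGA components can be inspected; a sketch using the standard dynamic performance views:

-- Current sizes of the autotuned and manually sized SGA components:
SELECT component, current_size/1024/1024 AS size_mb
FROM v$sga_dynamic_components;

-- SGA memory still unallocated within SGA_MAX_SIZE:
SELECT current_size/1024/1024 AS free_mb
FROM v$sga_dynamic_free_memory;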

Q78. Examine the following impdp command to import a database over the network from a pre-12c Oracle database (source): 

Which three are prerequisites for successful execution of the command? 

A. The import operation must be performed by a user on the target database with the DATAPUMP_IMP_FULL_DATABASE role, and the database link must connect to a user on the source database with the DATAPUMP_EXP_FULL_DATABASE role. 

B. All the user-defined tablespaces must be in read-only mode on the source database. 

C. The export dump file must be created before starting the import on the target database. 

D. The source and target database must be running on the same platform with the same endianness. 

E. The path of data files on the target database must be the same as that on the source database. 

F. The impdp operation must be performed by the same user that performed the expdp operation. 

Answer: A,B,D 

Explanation: In this case the impdp command is run without performing any conversion. If the endian formats differ, a conversion must be performed first. 
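
A hedged sketch of what such a full transportable network import typically looks like (the database link, file paths, and credentials are hypothetical):

$ impdp system/password NETWORK_LINK=source_db FULL=Y \
    TRANSPORTABLE=ALWAYS VERSION=12 \
    TRANSPORT_DATAFILES='/u01/oradata/users01.dbf' \
    DIRECTORY=dumpdir LOGFILE=import.log

VERSION=12 is required when the source is a pre-12c database, and no dump file is created because the data moves over the database link.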

Q79. Which three statements are true about the working of system privileges in a multitenant container database (CDB) that has pluggable databases (PDBs)? 

A. System privileges apply only to the PDB in which they are used. 

B. Local users cannot use local system privileges on the schema of a common user. 

C. The grantor of system privileges must possess the SET CONTAINER privilege. 

D. Common users connected to a PDB can exercise privileges across other PDBs. 

E. System privileges with the WITH GRANT OPTION CONTAINER = ALL clause must be granted to a common user before the common user can grant privileges to other users. 

Answer: A,C,E 

Explanation: A, not D: In a CDB, PUBLIC is a common role. In a PDB, privileges granted locally to PUBLIC enable all local and common users to exercise these privileges in this PDB only. 

C: A user can only perform common operations on a common role, for example, granting privileges commonly to the role, when the following criteria are met: 

The user is a common user whose current container is root. 

The user has the SET CONTAINER privilege granted commonly, which means that the privilege applies in all containers. 

The user has a privilege controlling the ability to perform the specified operation, and this privilege has been granted commonly. 

Note: Every privilege and role granted to Oracle-supplied users and roles is granted commonly, except for system privileges granted to PUBLIC, which are granted locally. 
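
As a sketch, granting a system privilege commonly versus locally (the user name c##dba is hypothetical):

-- Connected to the root as a common user whose current container is root:
GRANT CREATE SESSION TO c##dba CONTAINER = ALL;      -- applies in all containers

-- The same grant with CONTAINER = CURRENT applies only in the current container:
GRANT CREATE SESSION TO c##dba CONTAINER = CURRENT;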

Q80. Examine the parameter for your database instance: 

You generated the execution plan for the following query in the plan table and noticed that the nested loop join was done. After actual execution of the query, you notice that the hash join was done in the execution plan: 

Identify the reason why the optimizer chose different execution plans. 

A. The optimizer used a dynamic plan for the query. 

B. The optimizer chose different plans because automatic dynamic sampling was enabled. 

C. The optimizer used re-optimization cardinality feedback for the query. 

D. The optimizer chose a different plan because extended statistics were created for the columns used. 

Answer: B 

Explanation: * OPTIMIZER_DYNAMIC_SAMPLING controls both when the database gathers dynamic statistics and the size of the sample that the optimizer uses to gather the statistics. Range of values: 0 to 11.
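
For reference, a quick sketch of inspecting and setting the parameter; at level 11 the optimizer automatically decides when to use dynamic statistics, which can make the executed plan differ from the one previously explained:

SQL> SHOW PARAMETER optimizer_dynamic_sampling

-- Level 11 enables automatic dynamic sampling for the session:
SQL> ALTER SESSION SET optimizer_dynamic_sampling = 11;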