Accurate 70-765 Braindumps 2021

It is faster and easier to pass the exam with immediate access to practice questions that cover the same core areas, backed by professionally verified answers. Pass your exam with a high score now.

Microsoft 70-765 Free Dumps Questions Online, Read and Test Now.

NEW QUESTION 1
You administer a Microsoft SQL Server 2014 server. You plan to deploy new features to an application. You need to evaluate existing and potential clustered and non-clustered indexes that will improve
performance.
What should you do?

  • A. Query the sys.dm_db_index_usage_stats DMV.
  • B. Query the sys.dm_db_missing_index_details DMV.
  • C. Use the Database Engine Tuning Advisor.
  • D. Query the sys.dm_db_missing_index_columns DMV.

Answer: C

Explanation: The Microsoft Database Engine Tuning Advisor (DTA) analyzes databases and makes recommendations that you can use to optimize query performance. You can use the Database Engine Tuning Advisor to select and create an optimal set of indexes, indexed views, or table partitions without having an expert understanding of the database structure or the internals of SQL Server.
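For comparison with option C, the missing-index DMVs named in the other options can be queried directly. A minimal sketch (the columns and joins follow the documented DMV schema; the TOP filter is illustrative):

```sql
-- Inspect potential missing indexes recorded by the optimizer.
-- These suggestions are cleared on instance restart and only cover
-- *missing* indexes; they do not evaluate existing ones.
SELECT TOP (10)
       d.statement         AS table_name,
       d.equality_columns,
       d.inequality_columns,
       d.included_columns,
       s.user_seeks,
       s.avg_user_impact   -- estimated % improvement if the index existed
FROM sys.dm_db_missing_index_details AS d
JOIN sys.dm_db_missing_index_groups AS g
  ON g.index_handle = d.index_handle
JOIN sys.dm_db_missing_index_group_stats AS s
  ON s.group_handle = g.index_group_handle
ORDER BY s.avg_user_impact DESC;
```

Unlike these DMVs, the Database Engine Tuning Advisor evaluates existing as well as potential indexes, which is why it fits the requirement here.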

NEW QUESTION 2
HOTSPOT
You need to ensure that a user named Admin2 can manage logins.
How should you complete the Transact-SQL statements? To answer, select the appropriate Transact-SQL segments in the answer area.
Answer Area
[Exhibit]

    Answer:

Explanation: Step 1: CREATE LOGIN
First you need to create a login for SQL Azure; its syntax is as follows: CREATE LOGIN username WITH PASSWORD = 'password';
Step 2: CREATE USER. Step 3: LOGIN
Users are created per database and are associated with logins. You must be connected to the database in which you want to create the user. In most cases, this is not the master database. Here is some sample Transact-SQL that creates a user:
CREATE USER readonlyuser FROM LOGIN readonlylogin;
Step 4: loginmanager
Members of the loginmanager role can create new logins in the master database.
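Put together, the steps above can be sketched as the following Transact-SQL (Admin2 and the password are placeholders for the values in the exhibit):

```sql
-- In the master database: create the server-level login
CREATE LOGIN Admin2 WITH PASSWORD = '<strong_password_here>';

-- Still connected to master: create a user for the login,
-- then grant login-management rights via the loginmanager role
CREATE USER Admin2 FROM LOGIN Admin2;
ALTER ROLE loginmanager ADD MEMBER Admin2;
```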
    References:
    https://azure.microsoft.com/en-us/blog/adding-users-to-your-sql-azure-database/ https://docs.microsoft.com/en-us/azure/sql-database/sql-database-manage-logins

    NEW QUESTION 3
    You have Microsoft SQL server on a Microsoft Azure virtual machine. The virtual machine has 200 GB of data.
Users report slow response times when querying the database.
    You need to identify whether the storage subsystem causes the performance issue. Which performance monitor counter should you view?

    • A. Disk sec/Write
    • B. Avg. Disk Read Queue Length
    • C. % Disk Read Time
    • D. Disk sec/Read

    Answer: B

    NEW QUESTION 4
    You administer a Microsoft SQL Server 2014 database named ContosoDb. Tables are defined as shown in the exhibit. (Click the Exhibit button.)
[Exhibit]
    You need to display rows from the Orders table for the Customers row having the CustomerId value set to 1 in the following XML format.
[Exhibit]
    Which Transact-SQL query should you use?

    • A. SELECT OrderId, OrderDate, Amount, Name, Country FROM Orders INNER JOIN Customers ON Orders.CustomerId = Customers.CustomerId WHERE Customers.CustomerId = 1 FOR XML RAW
    • B. SELECT OrderId, OrderDate, Amount, Name, Country FROM Orders INNER JOIN Customers ON Orders.CustomerId = Customers.CustomerId WHERE Customers.CustomerId = 1 FOR XML RAW, ELEMENTS
    • C. SELECT OrderId, OrderDate, Amount, Name, Country FROM Orders INNER JOIN Customers ON Orders.CustomerId = Customers.CustomerId WHERE Customers.CustomerId = 1 FOR XML AUTO
    • D. SELECT OrderId, OrderDate, Amount, Name, Country FROM Orders INNER JOIN Customers ON Orders.CustomerId = Customers.CustomerId WHERE Customers.CustomerId = 1 FOR XML AUTO, ELEMENTS
    • E. SELECT Name, Country, OrderId, OrderDate, Amount FROM Orders INNER JOIN Customers ON Orders.CustomerId = Customers.CustomerId WHERE Customers.CustomerId = 1 FOR XML AUTO
    • F. SELECT Name, Country, OrderId, OrderDate, Amount FROM Orders INNER JOIN Customers ON Orders.CustomerId = Customers.CustomerId WHERE Customers.CustomerId = 1 FOR XML AUTO, ELEMENTS
    • G. SELECT Name AS '@Name', Country AS '@Country', OrderId, OrderDate, Amount FROM Orders INNER JOIN Customers ON Orders.CustomerId = Customers.CustomerId WHERE Customers.CustomerId = 1 FOR XML PATH ('Customers')
    • H. SELECT Name AS 'Customers/Name', Country AS 'Customers/Country', OrderId, OrderDate, Amount FROM Orders INNER JOIN Customers ON Orders.CustomerId = Customers.CustomerId WHERE Customers.CustomerId = 1 FOR XML PATH ('Customers')

    Answer: G
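To see why option G produces the attribute-centric shape for Name and Country, here is a sketch of the FOR XML PATH pattern (the exact exhibit output is not reproduced here):

```sql
-- Columns aliased with a leading @ become attributes of the row element;
-- unaliased columns become child elements of that element.
SELECT Name    AS '@Name',
       Country AS '@Country',
       OrderId, OrderDate, Amount
FROM Orders
INNER JOIN Customers ON Orders.CustomerId = Customers.CustomerId
WHERE Customers.CustomerId = 1
FOR XML PATH ('Customers');
-- Each row is emitted roughly as:
-- <Customers Name="..." Country="...">
--   <OrderId>...</OrderId><OrderDate>...</OrderDate><Amount>...</Amount>
-- </Customers>
```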

    NEW QUESTION 5
    You administer a Microsoft SQL Server 2014 server. The MSSQLSERVER service uses a domain account named CONTOSOSQLService.
    You plan to configure Instant File Initialization.
    You need to ensure that Data File Autogrow operations use Instant File Initialization. What should you do? Choose all that apply.

    • A. Restart the SQL Server Agent Service.
    • B. Disable snapshot isolation.
    • C. Restart the SQL Server Service.
    • D. Add the CONTOSOSQLService account to the Perform Volume Maintenance Tasks local security policy.
    • E. Add the CONTOSOSQLService account to the Server Operators fixed server role.
    • F. Enable snapshot isolation.

    Answer: CD

    Explanation: See "How To Enable Instant File Initialization."
    References: http://msdn.microsoft.com/en-us/library/ms175935.aspx
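On SQL Server 2012 SP4, 2016 SP1, and later builds, you can verify whether the running service actually picked up the privilege (a sketch; the column is not present on older builds):

```sql
-- 'Y' in the right-hand column means Instant File Initialization
-- is in effect for the Database Engine service.
SELECT servicename,
       instant_file_initialization_enabled
FROM sys.dm_server_services;
```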

    NEW QUESTION 6
    You administer a Microsoft SQL Server 2014 database that contains a table named AccountTransaction. You discover that query performance on the table is poor due to fragmentation on the
    IDX_AccountTransaction_AccountCode non-clustered index. You need to defragment the index. You also need to ensure that user queries are able to use the index during the defragmenting process.
    Which Transact-SQL batch should you use?

    • A. ALTER INDEX IDX_AccountTransaction_AccountCode ON AccountTransaction.AccountCode REORGANIZE
    • B. ALTER INDEX ALL ON AccountTransaction REBUILD
    • C. ALTER INDEX IDX_AccountTransaction_AccountCode ON AccountTransaction.AccountCode REBUILD
    • D. CREATE INDEX IDX_AccountTransaction_AccountCode ON AccountTransaction.AccountCode WITH DROP_EXISTING

    Answer: A

    Explanation: Reorganize: This option is more lightweight compared to rebuild. It runs through the leaf level of the index, and as it goes it fixes physical ordering of pages and also compacts pages to apply any previously set fillfactor settings. This operation is always online, and if you cancel it then it’s able to just stop where it is (it doesn’t have a giant operation to rollback).
    References: https://www.brentozar.com/archive/2013/09/index-maintenance-sql-server-rebuild-reorganize/
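As a sketch, the online defragmentation from the answer would normally be written against the table that owns the index (assuming it lives in the dbo schema), rather than a column:

```sql
-- REORGANIZE is always online: user queries can keep using
-- the index while the leaf level is compacted and reordered.
ALTER INDEX IDX_AccountTransaction_AccountCode
ON dbo.AccountTransaction
REORGANIZE;
```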

    NEW QUESTION 7
    You plan to deploy two new Microsoft Azure SQL Database instances. One instance will support a data entry application. The other instance will support the company’s business intelligence efforts. The databases will be accessed by mobile applications from public IP addresses.
    You need to ensure that the database instances meet the following requirements:
    The database administration team must receive alerts for any suspicious activity in the data entry database, including potential SQL injection attacks.
    Executives around the world must have access to the business intelligence application.
    Sensitive data must never be transmitted. Sensitive data must not be stored in plain text in the database. In the table below, identify the feature that you must implement for each database.
    NOTE: Make only one selection in each column. Each correct selection is worth one point.
    [Exhibit]

      Answer:

      Explanation: Data entry: Threat Detection
      SQL Threat Detection provides a new layer of security, which enables customers to detect and respond to potential threats as they occur by providing security alerts on anomalous activities. Users receive an alert upon suspicious database activities, potential vulnerabilities, and SQL injection attacks, as well as anomalous database access patterns.
      Business intelligence: Dynamic Data Masking
      Dynamic data masking (DDM) limits sensitive data exposure by masking it to non-privileged users. It can be used to greatly simplify the design and coding of security in your application.
      References:
      https://docs.microsoft.com/en-us/azure/sql-database/sql-database-threat-detection https://docs.microsoft.com/en-us/sql/relational-databases/security/dynamic-data-masking
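For the business intelligence database, a dynamic data masking rule might look like the following sketch (table, column, and principal names are hypothetical):

```sql
-- Mask an email column for non-privileged users; privileged users
-- still see the real value, and the stored data is unchanged.
ALTER TABLE dbo.Customers
ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()');

-- Allow a specific principal to see unmasked data
GRANT UNMASK TO BIReader;
```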

      NEW QUESTION 8
      You administer a Microsoft SQL Server 2014 database.
      The database contains a Product table created by using the following definition:
      [Exhibit]
      You need to ensure that the minimum amount of disk space is used to store the data in the Product table. What should you do?

      • A. Convert all indexes to Column Store indexes.
      • B. Implement Unicode Compression.
      • C. Implement row-level compression.
      • D. Implement page-level compression.

      Answer: D
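Page-level compression can be estimated first and then applied, roughly as follows (page compression includes row compression as a subset, which is why it minimizes disk space here; the schema name is assumed):

```sql
-- Estimate the space saved before committing to compression
EXEC sp_estimate_data_compression_savings
     @schema_name      = 'dbo',
     @object_name      = 'Product',
     @index_id         = NULL,
     @partition_number = NULL,
     @data_compression = 'PAGE';

-- Apply page compression to the table
ALTER TABLE dbo.Product REBUILD WITH (DATA_COMPRESSION = PAGE);
```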

      NEW QUESTION 9
      You plan to migrate a database to Microsoft Azure SQL Database. The database requires 500 gigabytes (GB) of storage.
      The database must support 50 concurrent logins. You must minimize the cost associated with hosting the database.
      You need to create the database. Which pricing tier should you use?

      • A. Standard S3 pricing tier
      • B. Premium P2 tier
      • C. Standard S2 pricing tier
      • D. Premium P1 tier

      Answer: D

      Explanation: For a database size of 500 GB the Premium tier is required. Both P1 and P2 are adequate. P1 is preferred as it is cheaper.
      Note:
      [Exhibit]

      NEW QUESTION 10
      HOTSPOT
      You plan to migrate a Microsoft SQL Server workload from an on-premises server to a Microsoft Azure virtual machine (VM). The current server contains 4 cores with an average CPU workload of 6 percent and a peak workload of 10 percent when using 2.4 GHz processors.
      You gather the following metrics:
      [Exhibit]
      You need to design a SQL Server VM to support the migration while minimizing costs.
      For each setting, which value should you use? To answer, select the appropriate storage option from each list in the answer area.
      NOTE: Each correct selection is worth one point.
      [Exhibit]

        Answer:

        Explanation: Data drive: Premium Storage. Transaction log drive: Standard Storage. TempDB drive: Premium Storage.
        Note: A standard disk is expected to handle 500 IOPS or 60 MB/s. A P10 Premium disk is expected to handle 500 IOPS. A P20 Premium disk is expected to handle 2,300 IOPS. A P30 Premium disk is expected to handle 5,000 IOPS.
        VM size: A3. Max data disk throughput is 8 x 500 IOPS.
        References: https://docs.microsoft.com/en-us/azure/virtual-machines/virtual-machines-windows-sizes

        NEW QUESTION 11
        You administer a Microsoft SQL Server 2014 instance that contains a financial database hosted on a storage area network (SAN).
        The financial database has the following characteristics:
        The database is continually modified by users during business hours from Monday through Friday between 09:00 hours and 17:00 hours. Five percent of the existing data is modified each day.
        The Finance department loads large CSV files into a number of tables each business day at 11:15 hours and 15:15 hours by using the BCP or BULK INSERT commands. Each data load adds 3 GB of data to the database.
        These data load operations must occur in the minimum amount of time.
        A full database backup is performed every Sunday at 10:00 hours. Backup operations will be performed every two hours (11:00, 13:00, 15:00, and 17:00) during business hours.
        On Wednesday at 10:00 hours, the development team requests you to refresh the database on a development server by using the most recent version.
        You need to perform a full database backup that will be restored on the development server. Which backup option should you use?

        • A. NORECOVERY
        • B. FULL
        • C. NO_CHECKSUM
        • D. CHECKSUM
        • E. Differential
        • F. BULK_LOGGED
        • G. STANDBY
        • H. RESTART
        • I. SKIP
        • J. Transaction log
        • K. DBO ONLY
        • L. COPY_ONLY
        • M. SIMPLE
        • N. CONTINUE AFTER ERROR

        Answer: L

        Explanation: COPY_ONLY specifies that the backup is a copy-only backup, which does not affect the normal sequence of backups. A copy-only backup is created independently of your regularly scheduled, conventional backups. A copy-only backup does not affect your overall backup and restore procedures for the database.
        References:
        https://docs.microsoft.com/en-us/sql/t-sql/statements/backup-transact-sql
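A copy-only full backup for the development refresh could be taken roughly like this (the database name and backup path are placeholders; the question does not name them):

```sql
-- COPY_ONLY leaves the differential base and log chain untouched,
-- so the scheduled Sunday full / two-hourly backup sequence is unaffected.
BACKUP DATABASE FinanceDb
TO DISK = N'\\backupshare\FinanceDb_DevRefresh.bak'
WITH COPY_ONLY, INIT;
```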

        NEW QUESTION 12
        You have Microsoft SQL Server on a Microsoft Azure Virtual machine that has a 4-TB database.
        You plan to configure daily backups for the database. A single full backup will be approximately 1.5 TB of compressed data.
        You need to ensure that the last backups are retained. Where should you store the daily backups?

        • A. Local storage
        • B. Page blob storage
        • C. Virtual disks
        • D. Block blob storage.

        Answer: D

        Explanation: When backing up to Microsoft Azure Blob storage, SQL Server 2016 and later can stripe a backup across multiple blobs, enabling backups of large databases up to a maximum of 12.8 TB. This is done through block blobs.
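A sketch of how a striped backup to block blobs is taken (storage account, container, database name, and SAS secret are placeholders):

```sql
-- A Shared Access Signature credential, named after the container URL,
-- makes SQL Server write block blobs; striping across multiple URLs
-- allows very large backups.
CREATE CREDENTIAL [https://myaccount.blob.core.windows.net/backups]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET   = '<sas_token_here>';

BACKUP DATABASE BigDb
TO URL = 'https://myaccount.blob.core.windows.net/backups/BigDb_1.bak',
   URL = 'https://myaccount.blob.core.windows.net/backups/BigDb_2.bak'
WITH COMPRESSION;
```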

        NEW QUESTION 13
        Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution. Determine whether the solution meets stated goals.
        Your company plans to use Microsoft Azure Resource Manager templates for all future deployments of SQL Server on Azure virtual machines.
        You need to create the templates.
        Solution: You use Visual Studio to create a XAML template that defines the deployment and configuration settings for the SQL Server environment.
        Does the solution meet the goal?

        • A. Yes
        • B. No

        Answer: B

        Explanation: An Azure Resource Manager template consists of JSON, not XAML, including expressions that you can use to construct values for your deployment.
        A good JSON editor can simplify the task of creating templates.
        Note: In its simplest structure, an Azure Resource Manager template contains the following elements:
        {
        "$schema": "http://schema.management.azure.com/schemas/2015-01- 01/deploymentTemplate.json#",
        "contentVersion": "",
        "parameters": { },
        "variables": { },
        "resources": [ ],
        "outputs": { }
        }
        References:https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-authoring-templates

        NEW QUESTION 14
        Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
        After you answer a question in this sections, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
        You are migrating an on-premises Microsoft SQL Server instance to SQL Server on a Microsoft Azure virtual machine. The instance has 30 databases that consume a total of 2 TB of disk space.
        The instance sustains more than 30,000 transactions per second.
        You need to provision storage for the virtual machine. The storage must be able to support the same load as the on-premises deployment.
        Solution: You create one storage account that has 30 containers. You create a VHD in each container. Does this meet the goal?

        • A. Yes
        • B. No

        Answer: B

        Explanation: Each storage account handles up to 20,000 IOPS and 500 TB of data.
        References: https://www.tech-coffee.net/understand-microsoft-azure-storage-for-virtual-machines/

        NEW QUESTION 15
        You have a database named DB1 that contains a table named Table1. Table1 has 1 billion rows.
        You import 10 million rows of data into Table1. After the import, users report that queries take longer than usual to execute.
        You need to identify whether an out-of-date execution plan is causing the performance issue. Which dynamic management view should you use?

        • A. sys.dm_xtp_transaction_stats
        • B. sys.dm_exec_input_buffer
        • C. sys.dm_db_index_operational_stats
        • D. sys.dm_db_stats_properties

        Answer: C

        Explanation: The sys.dm_db_index_operational_stats dynamic management function provides the current low-level I/O, locking, latching, and access-method activity for each partition of a table. This information is useful when troubleshooting SQL Server performance issues.
        Reference:
        https://basitaalishan.com/2013/03/19/using-sys-dm_db_index_operational_stats-to-analyse-howindexes-are-utili
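A typical call, scoped to the current database and one table (parameters follow the documented signature; NULLs mean all indexes/partitions):

```sql
-- Latch and I/O-latch wait counts per index of dbo.Table1
SELECT OBJECT_NAME(s.object_id) AS table_name,
       s.index_id,
       s.leaf_insert_count,
       s.page_latch_wait_count,
       s.page_io_latch_wait_count
FROM sys.dm_db_index_operational_stats(
         DB_ID(), OBJECT_ID('dbo.Table1'), NULL, NULL) AS s;
```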

        NEW QUESTION 16
        You administer a Microsoft SQL Server 2014 database instance. You create a new user named UserA. You need to ensure that UserA is able to create SQL Server Agent jobs and execute SQL Server Agent jobs owned by UserA.
        To which role should you add UserA?

        • A. DatabaseMailUserRole
        • B. ServerGroupAdministratorGroup
        • C. SQLAgentUserRole
        • D. Securityadmin

        Answer: C

        Explanation: SQLAgentUserRole is the least privileged of the SQL Server Agent fixed database roles. It has permissions on only operators, local jobs, and job schedules. Members of SQLAgentUserRole have permissions on only local jobs and job schedules that they own. Members can create local jobs.
        References:https://docs.microsoft.com/en-us/sql/ssms/agent/sql-server-agent-fixed-database-roles
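Because the SQL Server Agent fixed database roles live in msdb, granting the role might be sketched as (assuming UserA already has a server login of the same name):

```sql
USE msdb;
-- The user must exist in msdb before it can join the Agent role
CREATE USER UserA FOR LOGIN UserA;
ALTER ROLE SQLAgentUserRole ADD MEMBER UserA;
```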

        NEW QUESTION 17
        You have a SQL Server 2016 database named DB1.
        You plan to import a large number of records from a SQL Azure database to DB1.
        You need to recommend a solution to minimize the amount of space used in the transaction log during the import operation.
        What should you include in the recommendation?

        • A. The bulk-logged recovery model
        • B. The full recovery model
        • C. A new partitioned table
        • D. A new log file
        • E. A new file group

        Answer: A

        Explanation: Compared to the full recovery model, which fully logs all transactions, the bulk-logged recovery model minimally logs bulk operations, while fully logging other transactions. The bulk-logged recovery model protects against media failure and, for bulk operations, provides the best performance and least log space usage.
        Note: The bulk-logged recovery model is a special-purpose recovery model that should be used only intermittently to improve the performance of certain large-scale bulk operations, such as bulk imports of large amounts of data.
        References: https://technet.microsoft.com/en-us/library/ms190692(v=sql.105).aspx
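A sketch of the switch around the import (DB1 as in the question; the log-backup path is a placeholder):

```sql
ALTER DATABASE DB1 SET RECOVERY BULK_LOGGED;

-- ... run the bulk import here (e.g. BULK INSERT / bcp) ...

ALTER DATABASE DB1 SET RECOVERY FULL;
-- A log backup containing minimally logged operations cannot be used
-- for point-in-time restore, so back up the log once finished.
BACKUP LOG DB1 TO DISK = N'D:\Backups\DB1_log.trn';
```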

        NEW QUESTION 18
        Settings and values for the instance:
        VM size: D3
        Storage location: Drive E
        Storage type: Standard
        Tempdb location: Drive C
        The workload on this instance makes heavy use of the tempdb database.
        You need to maximize the performance of the tempdb database.
        Solution: You use a D- Series VM and store the tempdb database on drive D. Does this meet the goal?

        • A. Yes
        • B. No

        Answer: A

        Explanation: For D-series, Dv2-series, and G-series VMs, the temporary drive on these VMs is SSD-based. If your workload makes heavy use of TempDB (such as temporary objects or complex joins), storing TempDB on the D drive could result in higher TempDB throughput and lower TempDB latency.
        References:
        https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sql/virtual-machines-windows-sql-performan
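Moving tempdb onto the temporary drive can be sketched as follows (logical file names assume the defaults; the folder path is an assumption):

```sql
-- Point the tempdb files at the SSD-backed temporary D drive
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'D:\SQLTemp\tempdb.mdf');
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'D:\SQLTemp\templog.ldf');
-- Restart the SQL Server service; tempdb is recreated at the new location.
```

Note that the temporary drive is wiped when an Azure VM is deallocated, so in practice a startup task must recreate the target folder before the service starts.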

        NEW QUESTION 19
        You administer a Microsoft SQL Server 2014 database.
        You need to ensure that the size of the transaction log file does not exceed 2 GB. What should you do?

        • A. Execute sp_configure 'max log size', 2G.
        • B. use the ALTER DATABASE...SET LOGFILE command along with the maxsize parameter.
        • C. In SQL Server Management Studio, right-click the instance and select Database Setting
        • D. Set the maximum size of the file for the transaction log.
        • E. in SQL Server Management Studio, right-click the database, select Properties, and then click Files.Open the Transaction log Autogrowth window and set the maximum size of the file.

        Answer: B

        Explanation: You can use the ALTER DATABASE (Transact-SQL) statement to manage the growth of a transaction log file.
        To control the maximum size of a log file in KB, MB, GB, or TB units, or to set growth to UNLIMITED, use the MAXSIZE option. Note, however, that there is no SET LOGFILE subcommand.
        References: https://technet.microsoft.com/en-us/library/ms365418(v=sql.110).aspx#ControlGrowth
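Capping the log at 2 GB with MODIFY FILE might look like this (the database and logical log file names are placeholders; the question does not state them, and the logical name can be found in sys.database_files):

```sql
ALTER DATABASE ContosoDb
MODIFY FILE (NAME = ContosoDb_log, MAXSIZE = 2GB);
```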

        NEW QUESTION 20
        Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
        After you answer a question in this sections, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
        You are tuning the performance of a virtual machine that hosts a Microsoft SQL Server instance. The virtual machine originally had four CPU cores and now has 32 CPU cores.
        The SQL Server instance uses the default settings and has an OLTP database named db1. The largest table in db1 is a key value store table named table1.
        Several reports use the PIVOT statement and access more than 100 million rows in table1.
        You discover that when the reports run, there are PAGELATCH_IO waits on PFS pages 2:1:1, 2:2:1, 2:3:1, and 2:4:1 within the tempdb database.
        You need to prevent the PAGELATCH_IO waits from occurring. Solution: You add more files to db1.
        Does this meet the goal?

        • A. Yes
        • B. No

        Answer: A

        Explanation: From SQL Server’s perspective, you can measure the I/O latency from sys.dm_os_wait_stats. If you consistently see high waiting for PAGELATCH_IO, you can benefit from a faster I/O subsystem for SQL Server.
        A cause can be poor design of your database - you may wish to split out data located on 'hot pages', which are accessed frequently and which you might identify as the causes of your latch contention. For example, if you have a currency table with a data page containing 100 rows, of which 1 is updated per transaction and you have a transaction rate of 200/sec, you could see page latch queues of 100 or more. If each page latch wait costs just 5ms before clearing, this represents a full half-second delay for each update. In this case, splitting out the currency rows into different tables might prove more performant (if less normalized and logically structured).
        References: https://www.mssqltips.com/sqlservertip/3088/explanation-of-sql-server-io-and-latches/

        Thanks for reading the newest 70-765 exam dumps! We recommend you to try the PREMIUM Dumpscollection 70-765 dumps in VCE and PDF here: http://www.dumpscollection.net/dumps/70-765/ (209 Q&As Dumps)