1. SQL comments generally add no performance overhead and improve readability.
2. Keep transactions short.
3. Avoid cursor operations, because cursors consume a large amount of resources.
4. Weigh IN against EXISTS (see the sketch after this list).
5. Avoid nested views, and avoid views that span multiple tables when only a single table needs to be queried.
6. DISTINCT and ORDER BY carry extra cost; this processing can often be done in the application instead.
7. Give every column a default value and declare it NOT NULL wherever possible: a column containing NULL values cannot take advantage of its index even if one exists, and queries using IS NULL or IS NOT NULL will not use the index either (see the second sketch after this list).
8. Avoid implicit type conversions in queries over very large data sets.
9. Create indexes on foreign-key columns.
10. UNION vs. UNION ALL: the latter is more efficient because it skips the duplicate-elimination step.
11. INSERT statements: INSERT INTO table VALUES (…) performs worse than INSERT INTO table (…) VALUES (…) with an explicit column list.
12. Notes on ORDER BY: any non-indexed column or computed expression in ORDER BY will slow the query down.
13. Weigh BETWEEN against IN.
…
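A minimal sketch of the IN/EXISTS trade-off from item 4. The customers/orders tables and their columns are hypothetical, not from the original notes:
-- IN: the subquery's result set is produced first, then the outer rows are matched against it.
select c.cust_id, c.cust_name
from customers c
where c.cust_id in (select o.cust_id from orders o);
-- EXISTS: for each outer row the subquery only needs to find one matching row,
-- which is often cheaper when orders is large and orders.cust_id is indexed
-- (modern optimizers may rewrite one form into the other anyway).
select c.cust_id, c.cust_name
from customers c
where exists (select 1 from orders o where o.cust_id = c.cust_id);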
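And a sketch of items 7 and 9 together (defaults with NOT NULL, plus an index on the foreign-key column); again, the table is hypothetical:
-- Every column gets a default and NOT NULL where it makes sense (item 7),
-- and the foreign-key column gets its own index (item 9).
create table orders (
    order_id number primary key,
    cust_id  number not null,                       -- FK to customers
    status   varchar2(10) default 'NEW' not null,
    constraint fk_orders_cust foreign key (cust_id) references customers (cust_id)
);
create index idx_orders_cust on orders (cust_id);   -- avoids full scans on joins and FK checks
-- Because status is NOT NULL with a default, it never holds NULL and an index on it stays fully usable;
-- on a nullable column, a "status IS NULL" predicate cannot use a normal B-tree index (item 7).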
After some Oracle sessions are killed, their status is set to "killed", but the resources they lock are sometimes not released for a very long time; occasionally there is no way around it other than restarting the database. Here is a way to deal with the problem: whatever cannot be killed inside Oracle is killed again at the OS level.
1. The following statement shows which objects are locked:
select object_name,machine,s.sid,s.serial#
from v$locked_object l,dba_objects o ,v$session s
where l.object_id = o.object_id and l.session_id=s.sid;
2. The following statement kills a session:
alter system kill session '24,111';   (24 and 111 are the sid and serial# returned by the query above)
[Note] The two steps above can also be performed from Oracle's management console.
3. If after the command above the session's status is set to "killed" but the locked resources are still not released for a long time, the corresponding OS process (thread) can be killed. First run the following statement to get the process (thread) number:
select spid, osuser, s.program
from v$session s, v$process p
where s.paddr = p.addr and s.sid = 24;   -- 24 is the sid returned above
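Optionally, steps 1 and 3 can be folded into one query. The combined form below is only a convenience sketch, not part of the original procedure; it returns each locked object together with the sid, serial#, and OS spid in a single pass:
select o.object_name, s.sid, s.serial#, p.spid, s.osuser, s.machine
from v$locked_object l, dba_objects o, v$session s, v$process p
where l.object_id = o.object_id
and l.session_id = s.sid
and s.paddr = p.addr;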
4. Kill the process (thread) at the OS level:
1) On UNIX, run as root:
# kill -9 12345   (12345 is the spid returned in step 3)
2) On Windows, use orakill to kill the thread. orakill is an executable shipped with Oracle on Windows (where each session runs as a thread inside the oracle.exe process); its syntax is:
orakill sid thread
where:
sid: the name of the instance that the thread to be killed belongs to
thread: the thread to kill, i.e. the spid returned in step 3.
Example: c:\>orakill orcl 12345
ORA-00031: session marked for kill
Cause: The session specified in an ALTER SYSTEM KILL SESSION command cannot be killed immediately (because it is rolling back or blocked on a network operation), but it has been marked for kill. This means it will be killed as soon as possible after its current uninterruptible operation is done.
Action: No action is required for the session to be killed, but further executions of the ALTER SYSTEM KILL SESSION command on this session may cause the session to be killed sooner.
If the session remains in this state and its locks are still not released, fall back to the OS-level kill from step 4:
kill -9 12345
The big-data concept has been extremely hot recently; every vendor sees big data as an important direction for the future of IT, and so every vendor wants to make its mark in this area. A few days ago I attended an IBM big-data seminar at which IBM presented its big-data solutions: three appliance products (the PureSystems line; IBM has also released DB2 v10, and PureScale, which was designed specifically to take on Oracle RAC, has now been officially folded into that major DB2 release).
On the MPP front, Microsoft used to be criticized for lacking a product to meet the big-data challenge. After searching online I found that Microsoft has in fact shipped an MPP data-warehouse architecture since SQL Server 2008 R2, and that it will release its own appliance this year.
There is an article that explains the SQL Server PDW architecture in great detail; it is well worth a read for anyone interested (source: http://www.jamesserra.com/archive/2011/08/microsoft-sql-server-parallel-data-warehouse-pdw-explained/):
Microsoft SQL Server Parallel Data Warehouse (PDW), formerly known by its code name “Project Madison”, is an edition of Microsoft’s SQL Server 2008 R2 that was released in December 2010. PDW is Microsoft’s reworking of the DatAllegro Inc. massively parallel processing (MPP) product that Microsoft acquired in July 2008. It only works with certain hardware (two vendors so far), the first of which is the HP Enterprise Data Warehouse Appliance (Dell is the other, with a couple more to come in the near future: IBM and Bull). This edition of SQL Server can’t be bought as an independent piece of software; it has to be bought along with the hardware.
So what is MPP? Until now, SQL Server has been a Symmetric Multiprocessing (SMP) solution, which essentially means it uses one server. MPP provides scalability and query performance by running independent servers in parallel. That is the quick definition. For more details, read What MPP means to SQL Server Parallel Data Warehouse.
MPP is also available from other companies such as EMC Greenplum, Teradata, Oracle Exadata, HP Vertica, and IBM Netezza, but those use proprietary systems, whereas PDW can be used with commodity hardware, providing a much lower cost per terabyte. But it’s still not inexpensive: the hardware and installation will cost around $2 million (not including software licenses), but gets you “200 times faster queries and 10 times the scalability of traditional Microsoft SQL Server deployments” (see press release). PDW also comes with its own support model.
Microsoft has had clustering capabilities in SQL Server for a while, but the scalability part was lacking. This is where PDW comes in. Scalability in PDW means handling tens of terabytes of data and then moving to hundreds of terabytes worth (up to 600 TB). At about 50 to 60 terabytes of data, clustering starts to approach its limits, and that is when you need to move to PDW. Clustering brings concurrency to the system and reduces load, but it can’t reduce the time that a single query would take without any resource latency. To break this barrier, parallelism is required to execute bits of the same request simultaneously, and that is exactly what this setup brings to the table. PDW partitions large tables across multiple physical nodes, each having its own dedicated CPU, memory, and storage, and each running its own instance of SQL Server in a parallel shared-nothing design. Tables can either be replicated, where a copy sits on each node (usually dimension tables), or distributed, where portions of a table are uniformly distributed across all nodes (usually fact tables).
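To make the replicated/distributed distinction concrete, here is a hedged sketch of the corresponding CREATE TABLE options in PDW-style DDL; the table and column names are made up, and the exact option syntax may differ between PDW releases:
-- Small dimension table: a full copy is kept on every node.
create table DimProduct (
    ProductKey  int not null,
    ProductName varchar(100)
)
with (distribution = replicate);
-- Large fact table: rows are hash-distributed across all nodes by ProductKey.
create table FactSales (
    ProductKey  int not null,
    SalesAmount money not null
)
with (distribution = hash(ProductKey));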
One drawback to PDW is that it does not use SQL Server Management Studio, but uses a third-party tool called Nexus Chameleon (this third-party tool is needed because SSMS hasn’t been reworked to connect directly to the control node of the Parallel Data Warehouse). It also uses its own query engine and not all features of SQL Server are supported. So, you might not be able to use all your DBA tricks. And you wouldn’t want to build a solution against SQL Server and then just hope to upsize it to Parallel Data Warehouse Edition.
PDW uses multiple servers within the appliance, virtualized as if they were one unified data-warehousing resource. It can use up to 480 cores. PDW works by controlling several different physical servers, each running its own instance of SQL Server 2008 R2. The database and its tables are spread across these physical servers but appear as one database and set of tables to the end user. The control node, the brain of the PDW, manages query execution and the metadata for what is stored and processed on which portion of the PDW.
Microsoft IT’s experience with PDW showed that when they migrated Information Security Consolidated Event Management (ICE) to PDW, query performance improved to an average of 15-20 times faster, SSIS data-load throughput reached up to 285 GB/hour (with minimal query-performance impact), and SSIS sustained up to 12 TB/day of throughput. See the video linked in the original article.
Part of the technology incorporated into PDW includes a parallel database copy that enables rapid data movement and consistency between PDW and data marts used by SSAS.
In short, PDW is ideal for large data warehouses and BI, but not for OLTP systems. Write one check, and you get a complete soup-to-nuts data warehouse storage engine that covers everything from the servers and SAN to configuration and training.
HP calls PDW by a different name: Enterprise Data Warehouse (EDW). The HP Enterprise Data Warehouse Appliance (full specs, review, architecture overview, and performance guide are linked from the original article) uses a hub-and-spoke architecture and supports up to 47 servers, made up of a control rack and a data rack. A one-rack system has 17 servers, 22 processors/132 cores, and 125 TB, and can be scaled out to a four-rack system with 47 servers, 82 processors/492 cores, and 500 TB.
The future road map for PDW includes column store, petabyte scalability, real-time data warehousing, MDM, and data quality.