Limiting columns per record in CQL
Problem description
I have a problem which has been bothering me for quite a while now. I'm scaling it down for simplification.
I have a column family in Cassandra defined as:
CREATE TABLE "Test" (
key text,
column1 text,
value text,
PRIMARY KEY (key, column1)
)
If I run a query in CQL such as:
select * from "Test" where key in ('12345','34567');
it gives me something like:

 key   | column1 | value
-------+---------+--------
 12345 |     764 |    764
 12345 |     836 |    836
 12345 |  123723 | 123723
 12345 |  155863 | 155863
 34567 |  159144 | 159144
 34567 |  159869 | 159869
 34567 |  160705 | 160705
Now my question is: how can I limit my results to at most 2 rows per partition key? I tried the following, but neither worked.
select FIRST 10 'a'..'z' from "Test" where key in ('12345','34567');
- Not available in the latest CQL version.
select * from "Test" where key in ('12345','34567') limit 2;
- Only limits the total number of rows, not rows per record.
Recommended answer
There is no way to express this type of limit in CQL3. You have to run a separate query for each partition.
If query latency is not an issue for you, you can always install SparkSQL/Hive on top of your Cassandra database for complex analytical queries like the one in your original question. You can even cache the results of these queries.
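As a sketch of what such an analytical query could look like, a SparkSQL/Hive window function can rank rows within each partition key and keep the top two. This assumes the Cassandra table has been registered with Spark under the name Test; the column names come from the schema above, but the registration itself and the choice of ROW_NUMBER ordering are assumptions:

```sql
-- SparkSQL/Hive (not CQL): top 2 rows per key via a window function
SELECT key, column1, value
FROM (
  SELECT key, column1, value,
         ROW_NUMBER() OVER (PARTITION BY key ORDER BY column1) AS rn
  FROM Test
) ranked
WHERE rn <= 2;
```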