music21: parsing notes and durations per track


Problem description

I'm trying to use music21 to convert a multi-track MIDI file into an array of notes and durations for each track.

For example, given a MIDI file test.mid with 16 tracks in it,

I would like to get 16 arrays of tuples, each consisting of (pitch, duration, and possibly the position of the note).
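
To make the target format concrete, the result might look something like the sketch below; the pitch names, durations, and offsets are invented purely for illustration.

# Hypothetical target structure: one list of (pitch, quarter-length duration, offset)
# tuples per track; the values shown are made up for illustration only.
desired = [
    [('C4', 1.0, 0.0), ('E4', 0.5, 1.0), ('Rest', 0.5, 1.5)],  # track 1
    [('G3', 2.0, 0.0), ('A3', 1.0, 2.0)],                      # track 2
    # ... one list for each of the 16 tracks
]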

The documentation for music21 is rather difficult to follow, and I would really appreciate any help with this.

Recommended answer

There is more than one way to do this in music21, so this is just one simple way. Note that the duration value is expressed as a float, such that a quarter note equals 1.0, a half note equals 2.0, and so on:

from music21 import converter

# Parse the MIDI file into a music21 Score object
piece = converter.parse("full_path_to_piece.midi")

all_parts = []
for part in piece.parts:
    part_tuples = []
    for event in part:
        # Look up the offset of this event within the current part
        for site in event.contextSites():
            if site[0] is part:
                offset = site[1]
        # Notes are stored as (pitch name with octave, duration in quarter lengths, offset)
        if getattr(event, 'isNote', None) and event.isNote:
            part_tuples.append((event.nameWithOctave, event.quarterLength, offset))
        # Rests are stored with the placeholder label 'Rest'
        if getattr(event, 'isRest', None) and event.isRest:
            part_tuples.append(('Rest', event.quarterLength, offset))
    all_parts.append(part_tuples)
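
As a quick sanity check, the resulting structure could be inspected like this (the snippet simply assumes the code above has already run):

# Print a short summary of each extracted track
for i, part_tuples in enumerate(all_parts):
    print("Track", i, "contains", len(part_tuples), "events")
    print("First few events:", part_tuples[:5])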

An alternative solution would be to use the vis-framework, which accesses music files in symbolic notation via music21 and stores the information in pandas DataFrames. You can install it with:

pip install vis-framework

Another solution would be to use Humdrum instead of music21.
