[pci] Re: PCI interface



Hi!

Thanks for getting in touch, Blue Beaver. I don't know if you are on the
PCI mailing list yet, so I am sending this mail to your email address as well.

First I would like to say something about the two clock domains.
Since WISHBONE is a SoC bus and can operate at a much higher
frequency (200 MHz) than the PCI bus, the different bus speeds have
to be adapted to each other. The reason I think the FIFO is the most
appropriate clock-domain adaptation is that a WISHBONE device should
not occupy the WISHBONE bus longer than necessary (in case of block
transfers, etc.). E.g. writing a data block from a WB master into the
FIFO at the WB bus frequency occupies the WB bus for less time than
writing into the FIFO at the PCI frequency.
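(Rough numbers, assuming a 33 MHz PCI clock against the 200 MHz WB
clock above: a 16-word block written at the WB clock ties up the WB
bus for 16 x 5 ns = 80 ns, while writing the same block at the PCI
clock would hold it for 16 x 30 ns = roughly 480 ns.)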

I might also be wrong, so please correct me if I am.

I downloaded your FIFOs and saw that they can support
two clock domains.

I noticed that your FIFOs store the address first and the data on the
next FIFO line (or more data lines if there is a burst). That is very
well optimized for space, but more PCI-bus oriented, since address
and data are not multiplexed on the WB bus. That is not bad, but
I think the FIFO would be more efficient if it kept the data and the
address in the same line, because the WB bus will operate at a much
higher frequency. And yes, it is less space optimized when there are
burst transfers. In my opinion, efficiency is preferred, since the
space saving can be achieved through a smaller FIFO depth (if you
really need the space, you can't afford a big FIFO anyway).
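A minimal sketch of the layout I mean, in Verilog (all the names are
invented, nothing here is taken from your code):

    // One FIFO line carries {address, data}, so a WB write
    // retires in a single WB clock.
    module wb_fifo_line #(parameter AW = 32, DW = 32)
    ( input           wb_clk,
      input           wb_we,
      input  [AW-1:0] wb_addr,
      input  [DW-1:0] wb_data );

      reg [AW+DW-1:0] mem [0:15];  // address and data side by side
      reg [3:0]       wr_ptr = 0;

      always @(posedge wb_clk)
        if (wb_we) begin
          mem[wr_ptr] <= {wb_addr, wb_data};  // one line per write
          wr_ptr      <= wr_ptr + 1;
        end
    endmodule

A burst of N words then costs N lines instead of N+1, but every line
is AW bits wider - that is the trade-off I described above.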

What do you think about that?

I also checked Xilinx's application notes about FIFOs (thanks, Ovidiu).

I have an idea about the two clock domains in the FIFO.
I mentioned that I downloaded your FIFOs, and I also downloaded
Khatib's FIFOs, but I haven't found yet how the two of you validate
the data.
My idea for validating the written data is to compare the FIFO's
address counters. You then have to take care about when the
address counters are incremented.
One way is to increment the counters as Blue Beaver described to
me in his mail, but then you have to know which bus operates at the
higher frequency, and that's why there are two modes.
My idea is: if the frequency difference factor is around 3 or less,
you don't have to worry about which bus operates at the higher
frequency to get good throughput efficiency.
The data is written on the rising edge of the input clock; then the
address counter is incremented on the falling edge of the input
clock. The same is done on the output side of the FIFO.
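In rough Verilog, the write side of what I mean would look something
like this (just a sketch of the idea, not tested, and all the names
are made up):

    // Data is stored on the rising edge; the write pointer
    // advances on the falling edge, half a clock later, so the
    // read side never sees a pointer to a half-written entry.
    module fifo_wr_side #(parameter DW = 64)
    ( input           wr_clk,
      input           wr_en,
      input  [DW-1:0] wr_data,
      input  [3:0]    rd_ptr,  // from the read side (synchronized)
      output          empty_n );

      reg [DW-1:0] mem [0:15];
      reg [3:0]    wr_ptr  = 0;
      reg          wr_pend = 0;

      always @(posedge wr_clk) begin    // rising edge: the data
        if (wr_en) mem[wr_ptr] <= wr_data;
        wr_pend <= wr_en;
      end

      always @(negedge wr_clk)          // falling edge: the counter
        if (wr_pend) wr_ptr <= wr_ptr + 1;

      // data is valid as long as the two address counters differ
      // (full detection and pointer synchronization are left out)
      assign empty_n = (wr_ptr != rd_ptr);
    endmodule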

Please feel free to post your comments.

Regards, Tadej.


----- Original Message -----
From: "llbutcher" <llbutcher@veriomail.com>
To: "Miha Dolenc" <mihad@opencores.org>
Cc: "Lawrence Butcher" <bbeaver@opencores.org>; <tadej@opencores.org>
Sent: Thursday, May 10, 2001 12:53 PM
Subject: Re: PCI interface


> First, Tadej, Hi.
> Yes, Miha, you can forward my mail to the mailing list if you think it has
> value.
>
> Tadej, I see a note that you are asking about FIFOs.
> There are many ways to implement FIFOs.
>
> Key issues are the size of the FIFO and whether it will be used to
> talk from one clock domain to another, or whether it has both sides
> running on the same clock.  A secondary issue is whether you can say
> before-hand whether one side will have a faster clock than the other.
>
> The PCI_Blue_Interface, which I have roughly sketched out and which
> is presently available in super-alpha-because-only-part-done mode on
> the opencores webpage, has an example of a FIFO.
>
> Fifos meant to have both the in side and the out side running on the
> same clock can be simple.  You can write data and an indication that
> the data is valid at the same time.
>
> Fifos meant to go from one clock domain to another are more difficult.
>
> I wrote the FIFO Control logic to operate in one of 2 modes:
> 1) Write Side Clock runs faster than Read Side Clock.
>    In this case, I write data into the FIFO on one clock, and on the
>    next clock I write the indication that the data is valid.  On the
>    Read side, the data is available the same clock that the full
>    indication occurs.
> 2) Write Side Clock runs slower than Read Side Clock.
>    In this case, I write the Data and Full indication the same
>    clock.  The Read side must look for a data available indication,
>    and read the data the NEXT clock.
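>
> In rough pseudo-Verilog, mode 1 is about like this (not the real
> RTL, just the timing idea, with invented names):
>
>     // Mode 1: data on clock N, valid flag on clock N+1, so the
>     // flag can never cross the clock boundary ahead of the data.
>     module mode1_write #(parameter DW = 32)
>     ( input           wr_clk,
>       input           wr_en,
>       input  [DW-1:0] wr_data );
>       reg [DW-1:0] mem [0:15];
>       reg [15:0]   valid = 0;
>       reg [3:0]    wr_ptr = 0, wrote_ptr;
>       reg          wrote = 0;
>       always @(posedge wr_clk) begin
>         if (wr_en) begin
>           mem[wr_ptr] <= wr_data;   // clock N: the data
>           wrote_ptr   <= wr_ptr;
>           wr_ptr      <= wr_ptr + 1;
>         end
>         wrote <= wr_en;
>         if (wrote) valid[wrote_ptr] <= 1'b1;  // clock N+1: the flag
>       end
>     endmodule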
>
> Of course, the FIFO will work fine if the clock relationship is different
> than above.  It just won't be as fast as it could be.
>
> I also used the standard technique of sending a Gray-code indication
> of the Read and Write addresses across the clock boundary.  Gray-code
> counters have the advantage that you are wrong by at most 1 bit if you
> look at a value in one clock domain which was written in the other.
>
> NOTE: NOT ALWAYS TRUE.  This feature depends greatly on the
> delays being balanced for the different counter bits.
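>
> The conversion itself is one line (the usual form, written from
> memory):
>
>     module bin2gray #(parameter W = 4)
>     ( input  [W-1:0] bin,
>       output [W-1:0] gray );
>       // adjacent binary values give Gray codes that differ in
>       // exactly one bit position
>       assign gray = bin ^ (bin >> 1);
>     endmodule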
>
> I also wrote the FIFOs so that their size could be one of 3 values (or 4?)
> I can support 3, 5, 7, and 15 entry FIFOs, I think.
>
> I also wrote the FIFO so that it could be implemented with flops and
> MUXes, which will be the likely way to do it in a chip, or with
> Xilinx 2-port RAM cells.  (I used the ones which are 16 bits per CLB.)
>
> Please, if you are interested, look at the FIFO in the pci_blue_interface.
> Sorry if you don't like the bit-at-a-time style.  I thought this
> would help if Xilinx SRAMs needed to be manually instantiated.
>
> Miha:
>
> The Request Spare entries are there because I figured out how to use
> 7 and 15 types in the 2 FIFOs, not 8 and 16.  By leaving a spare, I figure
> I can fix things later.
>
> Every PCI Master needs to be a PCI Slave, too.  So it isn't obvious
> that they should be written separately.  On the other hand, a PCI
> Target can certainly exist without a Master.  Therefore, it seems
> fine to write a Target without regard for Mastership issues.
>
> I really wish I had made more progress on this, because it is very clear
> in my mind how to progress.  But I can't work on it for at least several
> weeks.
>
> The wishbone interface should assume that it is doing Memory Reads
> and Memory Writes if a normal reference is made.  I would think that
> extra register references, or extra address lines, would be needed
> to issue a different type of PCI reference.
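>
> For example, an extra address line could pick the command code like
> this (the encodings are from the PCI spec; everything else here is
> invented for illustration):
>
>     module pci_cmd_pick
>     ( input        write_cycle,
>       input        io_space,   // e.g. a spare WB address line
>       output [3:0] pci_cmd );
>       assign pci_cmd = io_space
>         ? (write_cycle ? 4'b0011 : 4'b0010)   // I/O Write / Read
>         : (write_cycle ? 4'b0111 : 4'b0110);  // Memory Write / Read
>     endmodule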
>
> Bursts would be especially hard, because there is no way for a PCI
> Master to change its mind about the length of a burst once a data
> item is offered.  If the master said "here comes one word, with more
> to follow" and the wishbone interface decided to NOT offer more
> data, the PCI protocol is violated.  Not good.
>
> One way to fix this is to collect write data into ANOTHER FIFO
> before it is offered to the PCI interface.  That way you can look at
> several entries in the FIFO at once, and discover if a Burst is
> possible.
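>
> The peek itself could be as simple as this (a sketch only, with
> invented names):
>
>     module burst_peek
>     ( input [3:0]  rd_ptr,
>       input [15:0] valid,          // one valid bit per FIFO entry
>       output       more_follows );
>       // only claim "more to follow" if the NEXT entry already
>       // holds committed write data, so PCI is never left hanging
>       // (an address-sequential check would be needed here too)
>       assign more_follows = valid[rd_ptr + 4'd1];
>     endmodule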
>
> The pci_blue_interface has a host side and a PCI side.  The two
> sides communicate TOTALLY through FIFOs.  No out-of-band signals.
> (OK, Reset goes through a different route.  Should that use the
> spare commands?)  Neither side is aware of the depth of the FIFOs.
> The user can substitute FIFOs, as long as the control logic acts as
> expected, to get an area or bus speed advantage.
>
>
> All for now:
>
> Blue Beaver
>
>
> -----Original Message-----
> From: Miha Dolenc <mihad@opencores.org>
> To: bbeaver@opencores.org <bbeaver@opencores.org>
> Date: Thursday, May 10, 2001 2:17 AM
> Subject: PCI interface
>
>
> >Hello,
> >
> >    it's me again. I've looked at pci_blue_constants.v and I'm
> >impressed how well a spec written in Verilog with extensive use of
> >commentary can turn out.
> >
> >OK, here is my comment:
> >I like your interface very much and I have a few questions for you:
> >1. I didn't quite understand the PCI_HOST_REQUEST_SPARE request and
> >what it will be used for.
> >
> >2. Do you think you could concentrate your efforts on the PCI
> >master interface first - we already have a member who is interested
> >in developing the PCI target, so they can be developed concurrently.
> >I would forward him your pci_blue_constants and he would comment too.
> >
> >3. Is this pci_blue_constants file pretty much finished, or will it
> >be changed a lot in the future? If it's almost done, someone can
> >start doing a WISHBONE interface that would connect to your PCI
> >interface.
> >
> >4. What PCI bus commands do you intend to support, and how can a
> >host request the use of a specific bus command?
> >
> >5. How much trouble would it cause you to do an interface that
> >would only contain the PCI master state machine, without
> >configuration and status registers and FIFOs? Or the other way
> >around - how much trouble will we have if we want to use your PCI
> >master without conf. regs and FIFOs? The request types you defined
> >would stay the same - some would be unused.
> >Let me tell you what I have in mind:
> >The control logic for the PCI modules of our bridge would be
> >separated from the state machines. The control logic would take
> >care of transaction ordering (which would be the same as yours in
> >the first version of the bridge).
> >I mean, this control logic would not issue a read request to your
> >PCI master until all writes are completed on the PCI bus. It would
> >also control the FIFOs.
> >Why do I think this is better? If we want to extend the
> >functionality of the PCI bridge (I'm sure we will, because when the
> >first version is done, I think more people will get interested in
> >helping out), we can just change the control logic, add more FIFO
> >depth, etc., and leave the PCI interface as it is. That's good,
> >because a PCI interface is not an easy one to do! And you could
> >concentrate more on the PCI bus protocol!
> >
> >Please tell me what you think about all this - I could be very
> >wrong, since I don't have that much experience.
> >Can I CC our communications to the pci mailing list, so others can
> >see what's going on?
> >
> >And, HAVE FUN ON YOUR VACATION!
> >
> >Regards,
> >    Miha Dolenc
> >